Feb 27 16:53:20 crc systemd[1]: Starting Kubernetes Kubelet...
Feb 27 16:53:20 crc restorecon[4679]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to
system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:20 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 
16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Feb 27 16:53:21 crc 
restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c37,c572 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 
16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]:
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:53:21 crc restorecon[4679]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Feb 27 16:53:21 crc restorecon[4679]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Feb 27 16:53:21 crc kubenswrapper[4708]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 16:53:21 crc kubenswrapper[4708]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Feb 27 16:53:21 crc kubenswrapper[4708]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 16:53:21 crc kubenswrapper[4708]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
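The long run of "not reset as customized by admin" records above is expected restorecon behavior: types such as container_file_t are in the policy's customizable_types set, so a relabel without -F leaves those paths' current contexts in place instead of forcing the file_contexts default, and logs each skipped path. A minimal sketch for spot-checking the current label on one of the listed paths, assuming a Linux host with SELinux and Python 3 (the path is only an example taken from this log):

    import os

    def selinux_label(path: str) -> str:
        # The kernel exposes a file's SELinux context as the
        # security.selinux xattr; the value is NUL-terminated bytes.
        raw = os.getxattr(path, "security.selinux")
        return raw.rstrip(b"\x00").decode()

    if __name__ == "__main__":
        # Example path seen earlier in this log; adjust for your host.
        print(selinux_label("/var/lib/kubelet/config.json"))

For a relabeled file this would print something like system_u:object_r:container_var_lib_t:s0, while the pod volumes restorecon skipped would still show their container_file_t:s0:cX,cY contexts.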
Feb 27 16:53:21 crc kubenswrapper[4708]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 27 16:53:21 crc kubenswrapper[4708]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.935614 4708 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.940795 4708 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.940915 4708 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.940928 4708 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.940937 4708 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.940947 4708 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.940957 4708 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.940965 4708 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.940975 4708 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.940983 4708 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.940992 4708 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941001 4708 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941011 4708 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941019 4708 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941027 4708 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941036 4708 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941044 4708 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941052 4708 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941070 4708 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941079 4708 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941088 4708 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941095 4708 feature_gate.go:330] 
unrecognized feature gate: HardwareSpeed Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941106 4708 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941116 4708 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941124 4708 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941134 4708 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941141 4708 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941149 4708 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941157 4708 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941165 4708 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941173 4708 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941181 4708 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941189 4708 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941197 4708 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941205 4708 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941216 4708 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941237 4708 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941245 4708 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941254 4708 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941263 4708 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941271 4708 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941281 4708 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941289 4708 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941296 4708 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941304 4708 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941313 4708 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941321 4708 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941329 4708 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941338 4708 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941346 4708 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941357 4708 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941367 4708 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941376 4708 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941385 4708 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941394 4708 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941402 4708 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941410 4708 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941417 4708 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941425 4708 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941432 4708 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941440 4708 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941447 4708 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941455 4708 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941467 4708 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
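
The long run of feature_gate.go:330 warnings above appears to come from OpenShift-level feature gates being handed to the upstream kubelet, which warns about and ignores each name it does not recognize, while gates it does know (CloudDualStackNodeIPs, ValidatingAdmissionPolicy, DisableKubeletCloudCredentialProviders, KMSv1) are set with a GA or deprecated notice (feature_gate.go:353/351); the same gate list is re-parsed several times during startup, so the warnings repeat. A small sketch to deduplicate them and see the distinct gate names, again assuming the excerpt is saved as the hypothetical "kubelet.log":

    import re
    from collections import Counter

    # Deduplicate "unrecognized feature gate" warnings: distinct gate
    # names plus how many parse passes repeated each one.
    GATE = re.compile(r"unrecognized feature gate: (\w+)")

    counts = Counter()
    with open("kubelet.log") as f:  # hypothetical saved journal excerpt
        for line in f:
            for name in GATE.findall(line):
                counts[name] += 1

    print(f"{len(counts)} distinct unrecognized gates")
    for name, n in counts.most_common():
        print(f"{n:3d}  {name}")
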
Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941476 4708 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941484 4708 feature_gate.go:330] unrecognized feature gate: Example Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941493 4708 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941502 4708 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941511 4708 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941520 4708 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941529 4708 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.941537 4708 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942281 4708 flags.go:64] FLAG: --address="0.0.0.0" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942304 4708 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942321 4708 flags.go:64] FLAG: --anonymous-auth="true" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942333 4708 flags.go:64] FLAG: --application-metrics-count-limit="100" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942344 4708 flags.go:64] FLAG: --authentication-token-webhook="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942354 4708 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942366 4708 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942376 4708 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942386 4708 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942394 4708 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942404 4708 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942413 4708 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942422 4708 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942431 4708 flags.go:64] FLAG: --cgroup-root="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942440 4708 flags.go:64] FLAG: --cgroups-per-qos="true" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942449 4708 flags.go:64] FLAG: --client-ca-file="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942458 4708 flags.go:64] FLAG: --cloud-config="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942467 4708 flags.go:64] FLAG: --cloud-provider="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942476 4708 flags.go:64] FLAG: --cluster-dns="[]" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942488 4708 flags.go:64] FLAG: --cluster-domain="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942497 4708 flags.go:64] FLAG: 
--config="/etc/kubernetes/kubelet.conf" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942507 4708 flags.go:64] FLAG: --config-dir="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942517 4708 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942527 4708 flags.go:64] FLAG: --container-log-max-files="5" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942537 4708 flags.go:64] FLAG: --container-log-max-size="10Mi" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942546 4708 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942555 4708 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942564 4708 flags.go:64] FLAG: --containerd-namespace="k8s.io" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942573 4708 flags.go:64] FLAG: --contention-profiling="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942582 4708 flags.go:64] FLAG: --cpu-cfs-quota="true" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942591 4708 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942601 4708 flags.go:64] FLAG: --cpu-manager-policy="none" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942610 4708 flags.go:64] FLAG: --cpu-manager-policy-options="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942621 4708 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942629 4708 flags.go:64] FLAG: --enable-controller-attach-detach="true" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942638 4708 flags.go:64] FLAG: --enable-debugging-handlers="true" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942647 4708 flags.go:64] FLAG: --enable-load-reader="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942656 4708 flags.go:64] FLAG: --enable-server="true" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942665 4708 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942676 4708 flags.go:64] FLAG: --event-burst="100" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942685 4708 flags.go:64] FLAG: --event-qps="50" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942694 4708 flags.go:64] FLAG: --event-storage-age-limit="default=0" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942703 4708 flags.go:64] FLAG: --event-storage-event-limit="default=0" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942711 4708 flags.go:64] FLAG: --eviction-hard="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942722 4708 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942730 4708 flags.go:64] FLAG: --eviction-minimum-reclaim="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942739 4708 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942748 4708 flags.go:64] FLAG: --eviction-soft="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942757 4708 flags.go:64] FLAG: --eviction-soft-grace-period="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942765 4708 flags.go:64] FLAG: --exit-on-lock-contention="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 
16:53:21.942774 4708 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942782 4708 flags.go:64] FLAG: --experimental-mounter-path="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942791 4708 flags.go:64] FLAG: --fail-cgroupv1="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942799 4708 flags.go:64] FLAG: --fail-swap-on="true" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942808 4708 flags.go:64] FLAG: --feature-gates="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942818 4708 flags.go:64] FLAG: --file-check-frequency="20s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942827 4708 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942836 4708 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942872 4708 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942882 4708 flags.go:64] FLAG: --healthz-port="10248" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942890 4708 flags.go:64] FLAG: --help="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942899 4708 flags.go:64] FLAG: --hostname-override="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942908 4708 flags.go:64] FLAG: --housekeeping-interval="10s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942918 4708 flags.go:64] FLAG: --http-check-frequency="20s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942927 4708 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942935 4708 flags.go:64] FLAG: --image-credential-provider-config="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942944 4708 flags.go:64] FLAG: --image-gc-high-threshold="85" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942953 4708 flags.go:64] FLAG: --image-gc-low-threshold="80" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942964 4708 flags.go:64] FLAG: --image-service-endpoint="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942974 4708 flags.go:64] FLAG: --kernel-memcg-notification="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942982 4708 flags.go:64] FLAG: --kube-api-burst="100" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.942991 4708 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943001 4708 flags.go:64] FLAG: --kube-api-qps="50" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943010 4708 flags.go:64] FLAG: --kube-reserved="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943019 4708 flags.go:64] FLAG: --kube-reserved-cgroup="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943027 4708 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943036 4708 flags.go:64] FLAG: --kubelet-cgroups="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943045 4708 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943054 4708 flags.go:64] FLAG: --lock-file="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943063 4708 flags.go:64] FLAG: --log-cadvisor-usage="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943071 4708 flags.go:64] 
FLAG: --log-flush-frequency="5s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943080 4708 flags.go:64] FLAG: --log-json-info-buffer-size="0" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943094 4708 flags.go:64] FLAG: --log-json-split-stream="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943102 4708 flags.go:64] FLAG: --log-text-info-buffer-size="0" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943112 4708 flags.go:64] FLAG: --log-text-split-stream="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943120 4708 flags.go:64] FLAG: --logging-format="text" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943129 4708 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943139 4708 flags.go:64] FLAG: --make-iptables-util-chains="true" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943147 4708 flags.go:64] FLAG: --manifest-url="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943156 4708 flags.go:64] FLAG: --manifest-url-header="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943167 4708 flags.go:64] FLAG: --max-housekeeping-interval="15s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943176 4708 flags.go:64] FLAG: --max-open-files="1000000" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943187 4708 flags.go:64] FLAG: --max-pods="110" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943195 4708 flags.go:64] FLAG: --maximum-dead-containers="-1" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943206 4708 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943215 4708 flags.go:64] FLAG: --memory-manager-policy="None" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943224 4708 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943233 4708 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943241 4708 flags.go:64] FLAG: --node-ip="192.168.126.11" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943251 4708 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943269 4708 flags.go:64] FLAG: --node-status-max-images="50" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943280 4708 flags.go:64] FLAG: --node-status-update-frequency="10s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943289 4708 flags.go:64] FLAG: --oom-score-adj="-999" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943298 4708 flags.go:64] FLAG: --pod-cidr="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943307 4708 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943321 4708 flags.go:64] FLAG: --pod-manifest-path="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943330 4708 flags.go:64] FLAG: --pod-max-pids="-1" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943339 4708 flags.go:64] FLAG: --pods-per-core="0" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943347 4708 flags.go:64] FLAG: --port="10250" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 
16:53:21.943356 4708 flags.go:64] FLAG: --protect-kernel-defaults="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943365 4708 flags.go:64] FLAG: --provider-id="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943373 4708 flags.go:64] FLAG: --qos-reserved="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943382 4708 flags.go:64] FLAG: --read-only-port="10255" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943391 4708 flags.go:64] FLAG: --register-node="true" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943400 4708 flags.go:64] FLAG: --register-schedulable="true" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943409 4708 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943423 4708 flags.go:64] FLAG: --registry-burst="10" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943432 4708 flags.go:64] FLAG: --registry-qps="5" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943441 4708 flags.go:64] FLAG: --reserved-cpus="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943449 4708 flags.go:64] FLAG: --reserved-memory="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943460 4708 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943469 4708 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943479 4708 flags.go:64] FLAG: --rotate-certificates="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943488 4708 flags.go:64] FLAG: --rotate-server-certificates="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943498 4708 flags.go:64] FLAG: --runonce="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943507 4708 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943518 4708 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943528 4708 flags.go:64] FLAG: --seccomp-default="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943537 4708 flags.go:64] FLAG: --serialize-image-pulls="true" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943546 4708 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943556 4708 flags.go:64] FLAG: --storage-driver-db="cadvisor" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943564 4708 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943573 4708 flags.go:64] FLAG: --storage-driver-password="root" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943582 4708 flags.go:64] FLAG: --storage-driver-secure="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943591 4708 flags.go:64] FLAG: --storage-driver-table="stats" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943601 4708 flags.go:64] FLAG: --storage-driver-user="root" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943609 4708 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943618 4708 flags.go:64] FLAG: --sync-frequency="1m0s" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943627 4708 flags.go:64] FLAG: --system-cgroups="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943636 4708 flags.go:64] FLAG: 
--system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943651 4708 flags.go:64] FLAG: --system-reserved-cgroup="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943660 4708 flags.go:64] FLAG: --tls-cert-file="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943668 4708 flags.go:64] FLAG: --tls-cipher-suites="[]" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943679 4708 flags.go:64] FLAG: --tls-min-version="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943687 4708 flags.go:64] FLAG: --tls-private-key-file="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943696 4708 flags.go:64] FLAG: --topology-manager-policy="none" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943705 4708 flags.go:64] FLAG: --topology-manager-policy-options="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943713 4708 flags.go:64] FLAG: --topology-manager-scope="container" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943722 4708 flags.go:64] FLAG: --v="2" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943733 4708 flags.go:64] FLAG: --version="false" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943744 4708 flags.go:64] FLAG: --vmodule="" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943754 4708 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.943764 4708 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944669 4708 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944684 4708 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944693 4708 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944702 4708 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944713 4708 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944724 4708 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944734 4708 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944744 4708 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944755 4708 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
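
The flags.go:64 "FLAG:" lines above dump every kubelet flag with its effective value, defaults plus this node's overrides such as --node-ip="192.168.126.11" and --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi". A sketch that parses that dump into a mapping, useful for inspecting or diffing the effective invocation between boots; the input path is again the hypothetical "kubelet.log":

    import re

    # Turn the flags.go:64 dump into a {flag: value} mapping.
    FLAG = re.compile(r'FLAG: (--[\w-]+)="([^"]*)"')

    flags = {}
    with open("kubelet.log") as f:  # hypothetical saved journal excerpt
        for line in f:
            for name, value in FLAG.findall(line):
                flags[name] = value

    print(flags.get("--node-ip"))          # 192.168.126.11
    print(flags.get("--system-reserved"))  # cpu=200m,ephemeral-storage=350Mi,memory=350Mi

Entries that happen to be split across journal-excerpt line breaks will not match; that is acceptable for a quick inspection pass.
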
Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944766 4708 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944775 4708 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944784 4708 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944793 4708 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944802 4708 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944831 4708 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944840 4708 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944877 4708 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944888 4708 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944896 4708 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944905 4708 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944915 4708 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944923 4708 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944931 4708 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944940 4708 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944948 4708 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944957 4708 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944965 4708 feature_gate.go:330] unrecognized feature gate: Example Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944973 4708 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944981 4708 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944989 4708 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.944998 4708 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945006 4708 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945014 4708 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945022 4708 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945030 4708 feature_gate.go:330] unrecognized 
feature gate: NutanixMultiSubnets Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945039 4708 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945047 4708 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945055 4708 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945063 4708 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945071 4708 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945079 4708 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945087 4708 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945095 4708 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945103 4708 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945111 4708 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945120 4708 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945128 4708 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945136 4708 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945149 4708 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945157 4708 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945165 4708 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945173 4708 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945181 4708 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945189 4708 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945197 4708 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945205 4708 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945213 4708 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945221 4708 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945229 4708 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945238 4708 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945246 4708 
feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945254 4708 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945262 4708 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945270 4708 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945278 4708 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945285 4708 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945293 4708 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945301 4708 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945309 4708 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945316 4708 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.945324 4708 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.946171 4708 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.961205 4708 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.961257 4708 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961410 4708 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961426 4708 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961437 4708 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961449 4708 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961459 4708 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961467 4708 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961475 4708 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961484 4708 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961492 4708 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Feb 27 16:53:21 crc 
kubenswrapper[4708]: W0227 16:53:21.961501 4708 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961509 4708 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961518 4708 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961525 4708 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961536 4708 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961545 4708 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961554 4708 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961564 4708 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961573 4708 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961582 4708 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961591 4708 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961599 4708 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961606 4708 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961614 4708 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961622 4708 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961630 4708 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961639 4708 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961646 4708 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961654 4708 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961661 4708 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961670 4708 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961678 4708 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961686 4708 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961693 4708 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961701 4708 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961713 4708 feature_gate.go:353] Setting GA feature gate 
ValidatingAdmissionPolicy=true. It will be removed in a future release. Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961727 4708 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961737 4708 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961747 4708 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961758 4708 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961768 4708 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961776 4708 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961785 4708 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961794 4708 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961803 4708 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961813 4708 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961821 4708 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961830 4708 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961838 4708 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961870 4708 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961878 4708 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961887 4708 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961895 4708 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961905 4708 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961915 4708 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961923 4708 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961931 4708 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961939 4708 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961947 4708 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961954 4708 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961962 4708 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961970 4708 feature_gate.go:330] unrecognized feature gate: Example Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961978 4708 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961985 4708 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.961993 4708 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962001 4708 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962009 4708 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962017 4708 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962028 4708 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962036 4708 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962045 4708 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962056 4708 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.962069 4708 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962334 4708 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962356 4708 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
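
The feature_gate.go:386 "feature gates:" lines above print the resolved gate set in Go map syntax. A self-contained sketch converting such a line into a Python dict of booleans; the sample string is abridged from the log:

    import re

    # Parse a Go-formatted "feature gates: {map[K:v ...]}" line into a dict.
    line = ("feature gates: {map[CloudDualStackNodeIPs:true "
            "DisableKubeletCloudCredentialProviders:true KMSv1:true "
            "ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}")

    gates = {k: v == "true" for k, v in re.findall(r"(\w+):(true|false)", line)}
    assert gates["KMSv1"] and not gates["VolumeAttributesClass"]
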
Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962369 4708 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962379 4708 feature_gate.go:330] unrecognized feature gate: SignatureStores Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962390 4708 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962400 4708 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962412 4708 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962422 4708 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962430 4708 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962438 4708 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962445 4708 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962453 4708 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962461 4708 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962469 4708 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962477 4708 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962485 4708 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962493 4708 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962501 4708 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962509 4708 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962518 4708 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962525 4708 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962533 4708 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962541 4708 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962549 4708 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962557 4708 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962564 4708 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962574 4708 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962585 4708 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962593 4708 feature_gate.go:330] unrecognized feature gate: Example Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962603 4708 feature_gate.go:330] unrecognized feature gate: PinnedImages Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962611 4708 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962619 4708 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962627 4708 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962634 4708 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962654 4708 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962662 4708 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962670 4708 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962677 4708 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962684 4708 feature_gate.go:330] unrecognized feature gate: GatewayAPI Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962692 4708 feature_gate.go:330] unrecognized feature gate: PlatformOperators Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962703 4708 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962712 4708 feature_gate.go:330] unrecognized feature gate: NewOLM Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962719 4708 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962729 4708 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962737 4708 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962745 4708 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962752 4708 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962761 4708 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962769 4708 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962776 4708 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962785 4708 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962793 4708 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962800 4708 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962808 4708 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962816 4708 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962824 4708 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962832 4708 feature_gate.go:330] unrecognized feature gate: InsightsConfig Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962840 4708 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962871 4708 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962879 4708 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962887 4708 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962895 4708 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962903 4708 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962911 4708 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962918 4708 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962927 4708 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962935 4708 feature_gate.go:330] 
unrecognized feature gate: HardwareSpeed Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962942 4708 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962950 4708 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962961 4708 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Feb 27 16:53:21 crc kubenswrapper[4708]: W0227 16:53:21.962982 4708 feature_gate.go:330] unrecognized feature gate: OVNObservability Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.962995 4708 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.964370 4708 server.go:940] "Client rotation is on, will bootstrap in background" Feb 27 16:53:21 crc kubenswrapper[4708]: E0227 16:53:21.970578 4708 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2026-02-24 05:52:08 +0000 UTC" logger="UnhandledError" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.975340 4708 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.975484 4708 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
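
The bootstrap.go:266 error above reports that the client certificate embedded in /var/lib/kubelet/kubeconfig is expired (2026-02-24 05:52:08 UTC), so the kubelet falls back to the bootstrap credentials and begins rotating, loading the pair from /var/lib/kubelet/pki/kubelet-client-current.pem. As a sketch, the validity window of that loaded pair can be checked with the third-party "cryptography" package; since the .pem file holds the certificate and key concatenated, the CERTIFICATE block is isolated before parsing. This assumes read access to the path as logged.

    from cryptography import x509

    # Report the validity window of the kubelet client certificate
    # that the log above says is being rotated.
    PEM = "/var/lib/kubelet/pki/kubelet-client-current.pem"

    data = open(PEM, "rb").read()
    start = data.index(b"-----BEGIN CERTIFICATE-----")
    end = data.index(b"-----END CERTIFICATE-----") + len(b"-----END CERTIFICATE-----")
    cert = x509.load_pem_x509_certificate(data[start:end])

    print("not before:", cert.not_valid_before)
    print("not after: ", cert.not_valid_after)

The rotation attempt that follows in the log fails with "connection refused" against api-int.crc.testing:6443, which during a node boot ordinarily just means the API server is not yet accepting connections; the kubelet retries the CSR.
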
Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.977820 4708 server.go:997] "Starting client certificate rotation"
Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.977895 4708 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Feb 27 16:53:21 crc kubenswrapper[4708]: I0227 16:53:21.979950 4708 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.003410 4708 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.009672 4708 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.012531 4708 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.033485 4708 log.go:25] "Validated CRI v1 runtime API"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.076685 4708 log.go:25] "Validated CRI v1 image API"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.079240 4708 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.087297 4708 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-02-27-16-48-50-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.087340 4708 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.114312 4708 manager.go:217] Machine: {Timestamp:2026-02-27 16:53:22.110785488 +0000 UTC m=+0.626583155 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:b0138667-dee2-429c-83f0-feff19c38749 BootID:ab7c2cd5-c0bb-486f-8dae-402228064a6a Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:01:31:31 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:01:31:31 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:f2:60:c7 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:49:38:e5 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:95:ea:81 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:85:e1:d4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6e:f1:d7:a9:65:47 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:1e:3b:4f:4d:d9:84 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.114683 4708 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.114958 4708 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.115382 4708 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.115674 4708 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.115726 4708 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.116079 4708 topology_manager.go:138] "Creating topology manager with none policy"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.116097 4708 container_manager_linux.go:303] "Creating device plugin manager"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.116658 4708 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.116709 4708 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.117469 4708 state_mem.go:36] "Initialized new in-memory state store"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.117624 4708 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.121776 4708 kubelet.go:418] "Attempting to sync node with API server"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.121812 4708 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.121877 4708 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.121898 4708 kubelet.go:324] "Adding apiserver pod source"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.121917 4708 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 27 16:53:22 crc kubenswrapper[4708]: W0227 16:53:22.127167 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.127247 4708 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.127510 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError"
Feb 27 16:53:22 crc kubenswrapper[4708]: W0227 16:53:22.127577 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused
Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.127680 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.128548 4708 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
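The container_manager_linux.go:272 record above embeds the kubelet's effective node config as JSON, including the systemd cgroup driver negotiated from CRI-O and the hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A small sketch, under the assumption that this journal excerpt is saved as "kubelet.log" (an illustrative file name, not from the log), to pull that JSON back out and list the thresholds:

import json
import re

# Scan the saved journal excerpt for the "Creating Container Manager object
# based on Node Config" record and decode its nodeConfig={...} payload.
for line in open("kubelet.log", encoding="utf-8", errors="replace"):
    m = re.search(r'nodeConfig=(\{.*\})', line)
    if m:
        cfg = json.loads(m.group(1))
        for t in cfg["HardEvictionThresholds"]:
            print(t["Signal"], t["Operator"], t["Value"])
        break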
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.133350 4708 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.136114 4708 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.136168 4708 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.136185 4708 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.136201 4708 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.136226 4708 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.136240 4708 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.136255 4708 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.136280 4708 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.136298 4708 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.136315 4708 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.136348 4708 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.136364 4708 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.137310 4708 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.138082 4708 server.go:1280] "Started kubelet"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.138398 4708 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.138498 4708 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.138837 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.139337 4708 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 27 16:53:22 crc systemd[1]: Started Kubernetes Kubelet.
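At this point systemd considers the unit up ("Started Kubernetes Kubelet."), yet every API call so far (the CSR post, the *v1.Service and *v1.Node lists, the CSINode lookup) has failed the same way: dial tcp 38.102.83.182:6443: connect: connection refused. That pattern is consistent with the kubelet starting before anything is listening on the API endpoint, which on this kind of single-node setup is itself served by a static pod the kubelet has yet to start. A quick sketch to re-test the endpoint, with the host and port taken from the failed requests in the log:

import socket

# api-int.crc.testing resolves to 38.102.83.182 per the dial errors above.
try:
    socket.create_connection(("api-int.crc.testing", 6443), timeout=3).close()
    print("api-int.crc.testing:6443 is accepting connections")
except OSError as exc:
    print("api-int.crc.testing:6443 still unreachable:", exc)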
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.142065 4708 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.142124 4708 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.142268 4708 server.go:460] "Adding debug handlers to kubelet server"
Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.142640 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.142727 4708 volume_manager.go:287] "The desired_state_of_world populator starts"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.142741 4708 volume_manager.go:289] "Starting Kubelet Volume Manager"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.142804 4708 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.143946 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="200ms"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.144457 4708 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.144493 4708 factory.go:55] Registering systemd factory
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.144512 4708 factory.go:221] Registration of the systemd container factory successfully
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.144969 4708 factory.go:153] Registering CRI-O factory
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.145007 4708 factory.go:221] Registration of the crio container factory successfully
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.145042 4708 factory.go:103] Registering Raw factory
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.145067 4708 manager.go:1196] Started watching for new ooms in manager
Feb 27 16:53:22 crc kubenswrapper[4708]: W0227 16:53:22.145162 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused
Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.145249 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.146110 4708 manager.go:319] Starting recovery of all containers
Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.145694 4708 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189828b2e267cdd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.138033622 +0000 UTC m=+0.653831249,LastTimestamp:2026-02-27 16:53:22.138033622 +0000 UTC m=+0.653831249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.169689 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170245 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170288 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170321 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170354 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170386 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170417 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170446 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170481 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170510 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170541 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170578 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170612 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170652 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170680 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170709 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170739 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170769 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170800 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170835 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170907 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170936 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170967 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.170994 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171025 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171054 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171093 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171134 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171168 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171208 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171237 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171271 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171309 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171345 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171375 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171408 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171439 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171471 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171502 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171528 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171556 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171585 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171616 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171646 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171677 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171712 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171742 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171772 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171799 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.171833 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172038 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172081 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172121 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172161 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172195 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172229 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172262 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172297 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172326 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172356 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172385 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172419 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172453 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172482 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172515 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172548 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172579 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172608 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172639 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172670 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172728 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172762 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172792 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172821 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172891 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172927 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172959 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.172986 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.173018 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.173045 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.173075 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.173110 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.173140 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.173171 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175371 4708 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175440 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175479 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175513 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175543 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175572 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175602 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175631 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175659 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175691 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175719 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175749 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175777 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175811 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175909 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175939 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175967 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.175998 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176028 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176054 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176081 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176128 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176160 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176198 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176236 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176269 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176298 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176330 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176362 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176397 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176430 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176464 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176496 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176527 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176557 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176587 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176617 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176645 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176672 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176743 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176770 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176802 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176881 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176904 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176927 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176950 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176972 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.176996 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177016 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177041 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177062 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177084 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177108 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177130 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177152 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177172 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177196 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext=""
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177218 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3"
volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177265 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177288 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177308 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177335 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177359 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177422 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177448 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177472 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177496 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177518 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177541 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177565 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177585 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177606 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177627 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177651 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177676 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177698 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177719 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177743 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177762 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177785 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177806 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177829 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177872 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177893 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177914 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177935 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177957 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177978 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.177998 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178020 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178044 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178065 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178085 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178110 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178129 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178150 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178172 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178193 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178214 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178236 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178260 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178284 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178311 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178334 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178354 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178377 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178399 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178421 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178443 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178466 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178489 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178510 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178532 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" 
volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178556 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178577 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178599 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178619 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178639 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178661 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178681 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178703 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178735 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178771 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178793 4708 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178812 4708 reconstruct.go:97] "Volume reconstruction finished" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.178827 4708 reconciler.go:26] "Reconciler: start to sync state" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.179246 4708 manager.go:324] Recovery completed Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.193812 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.196043 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.196102 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.196122 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.197243 4708 cpu_manager.go:225] "Starting CPU manager" policy="none" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.197265 4708 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.197357 4708 state_mem.go:36] "Initialized new in-memory state store" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.220758 4708 policy_none.go:49] "None policy: Start" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.221817 4708 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.221889 4708 state_mem.go:35] "Initializing new in-memory state store" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.222981 4708 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.226967 4708 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.227053 4708 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.227102 4708 kubelet.go:2335] "Starting kubelet main sync loop" Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.227198 4708 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 27 16:53:22 crc kubenswrapper[4708]: W0227 16:53:22.228563 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.228636 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.243006 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.285412 4708 manager.go:334] "Starting Device Plugin manager" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.285875 4708 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.285903 4708 server.go:79] "Starting device plugin registration server" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.286473 4708 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.286497 4708 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.286773 4708 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.286975 4708 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.286988 4708 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.296090 4708 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.327370 4708 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.327514 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.328936 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.328996 4708 
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.329016 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.329215 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.329511 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.329572 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.330530 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.330596 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.330610 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.330641 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.330668 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.330684 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.330910 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.331041 4708 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.331084 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.332428 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.332469 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.332487 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.332540 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.332619 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.332642 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.332942 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.333103 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.333162 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.334612 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.334661 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.334679 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.334677 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.334737 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.334759 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.334981 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.335596 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.335656 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.340266 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.340321 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.340344 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.340642 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.340673 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.340692 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.340715 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.340739 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.385105 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.385157 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.385177 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.385687 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="400ms" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.386672 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.387403 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.387449 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.387468 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.387504 4708 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.388242 4708 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: 
Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.486463 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.486539 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.486578 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.486617 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.486652 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.486687 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.486722 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.486755 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.486788 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.486823 4708
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.486889 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.486922 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.486953 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.486984 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.487018 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588268 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588325 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588361 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588399 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588457 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588487 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588494 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588518 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588552 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588604 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588604 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588640 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588645 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588742 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588646 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588675 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588618 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588977 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.589017 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.589050 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588675 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.589120 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.589090 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.589168 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod 
\"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.589168 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.589162 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.589221 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.588755 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.589267 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.589291 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.589443 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.590194 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.590256 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.590283 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.590329 4708 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.590799 4708 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" 
node="crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.705983 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.714417 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.733167 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.750202 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.757640 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:22 crc kubenswrapper[4708]: W0227 16:53:22.766213 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-83bde37131666cf7d73cca96c85a68a6a9751438fee2812feeea51876f24e81c WatchSource:0}: Error finding container 83bde37131666cf7d73cca96c85a68a6a9751438fee2812feeea51876f24e81c: Status 404 returned error can't find the container with id 83bde37131666cf7d73cca96c85a68a6a9751438fee2812feeea51876f24e81c Feb 27 16:53:22 crc kubenswrapper[4708]: W0227 16:53:22.768606 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-e19f47840cdce1ea22eb44fab638c9269814f6fede8eb5420df47391daef4bae WatchSource:0}: Error finding container e19f47840cdce1ea22eb44fab638c9269814f6fede8eb5420df47391daef4bae: Status 404 returned error can't find the container with id e19f47840cdce1ea22eb44fab638c9269814f6fede8eb5420df47391daef4bae Feb 27 16:53:22 crc kubenswrapper[4708]: W0227 16:53:22.779183 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-62e3a84e0ce57613017e69239e052f34d6ed871ca0df79f2e611f818504de557 WatchSource:0}: Error finding container 62e3a84e0ce57613017e69239e052f34d6ed871ca0df79f2e611f818504de557: Status 404 returned error can't find the container with id 62e3a84e0ce57613017e69239e052f34d6ed871ca0df79f2e611f818504de557 Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.787296 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="800ms" Feb 27 16:53:22 crc kubenswrapper[4708]: W0227 16:53:22.788084 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-eb01ed3ccb8da968fbe531ea861687b9032c2339a8a7ba7c78b18cc4c5143250 WatchSource:0}: Error finding container eb01ed3ccb8da968fbe531ea861687b9032c2339a8a7ba7c78b18cc4c5143250: Status 404 returned error can't find the container with id eb01ed3ccb8da968fbe531ea861687b9032c2339a8a7ba7c78b18cc4c5143250 Feb 27 16:53:22 crc kubenswrapper[4708]: W0227 16:53:22.798877 4708 manager.go:1169] Failed to process 
watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-933b977b2b20b1307639cf2c3e213dbbc66089a68755d9127f3eabefb2818d40 WatchSource:0}: Error finding container 933b977b2b20b1307639cf2c3e213dbbc66089a68755d9127f3eabefb2818d40: Status 404 returned error can't find the container with id 933b977b2b20b1307639cf2c3e213dbbc66089a68755d9127f3eabefb2818d40 Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.991440 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.994982 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.995036 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.995058 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:22 crc kubenswrapper[4708]: I0227 16:53:22.995105 4708 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:53:22 crc kubenswrapper[4708]: E0227 16:53:22.995717 4708 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Feb 27 16:53:23 crc kubenswrapper[4708]: I0227 16:53:23.140011 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Feb 27 16:53:23 crc kubenswrapper[4708]: I0227 16:53:23.236392 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"933b977b2b20b1307639cf2c3e213dbbc66089a68755d9127f3eabefb2818d40"} Feb 27 16:53:23 crc kubenswrapper[4708]: I0227 16:53:23.238101 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"eb01ed3ccb8da968fbe531ea861687b9032c2339a8a7ba7c78b18cc4c5143250"} Feb 27 16:53:23 crc kubenswrapper[4708]: I0227 16:53:23.240288 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"62e3a84e0ce57613017e69239e052f34d6ed871ca0df79f2e611f818504de557"} Feb 27 16:53:23 crc kubenswrapper[4708]: I0227 16:53:23.244212 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e19f47840cdce1ea22eb44fab638c9269814f6fede8eb5420df47391daef4bae"} Feb 27 16:53:23 crc kubenswrapper[4708]: I0227 16:53:23.245733 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"83bde37131666cf7d73cca96c85a68a6a9751438fee2812feeea51876f24e81c"} Feb 27 16:53:23 crc kubenswrapper[4708]: W0227 16:53:23.270374 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: 
failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Feb 27 16:53:23 crc kubenswrapper[4708]: E0227 16:53:23.270556 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:53:23 crc kubenswrapper[4708]: W0227 16:53:23.275809 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Feb 27 16:53:23 crc kubenswrapper[4708]: E0227 16:53:23.275934 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:53:23 crc kubenswrapper[4708]: W0227 16:53:23.418123 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Feb 27 16:53:23 crc kubenswrapper[4708]: E0227 16:53:23.418272 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:53:23 crc kubenswrapper[4708]: E0227 16:53:23.588156 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="1.6s" Feb 27 16:53:23 crc kubenswrapper[4708]: W0227 16:53:23.719181 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Feb 27 16:53:23 crc kubenswrapper[4708]: E0227 16:53:23.719316 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:53:23 crc kubenswrapper[4708]: I0227 16:53:23.796073 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:23 crc kubenswrapper[4708]: I0227 16:53:23.798155 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:23 crc kubenswrapper[4708]: I0227 16:53:23.798205 4708 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:23 crc kubenswrapper[4708]: I0227 16:53:23.798220 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:23 crc kubenswrapper[4708]: I0227 16:53:23.798260 4708 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:53:23 crc kubenswrapper[4708]: E0227 16:53:23.798903 4708 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.140354 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.174650 4708 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 16:53:24 crc kubenswrapper[4708]: E0227 16:53:24.176332 4708 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.252199 4708 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a4c2a3185db334c61b4ac014fd8671f44dbb10499e45d16adff33e506cea8dd4" exitCode=0 Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.252296 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a4c2a3185db334c61b4ac014fd8671f44dbb10499e45d16adff33e506cea8dd4"} Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.252472 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.254073 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.254124 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.254149 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.255345 4708 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="b99649371a89a5edf3755b32cf0e06a97758ae492b02d8de721cea66cb6e4e04" exitCode=0 Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.255469 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"b99649371a89a5edf3755b32cf0e06a97758ae492b02d8de721cea66cb6e4e04"} Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.255573 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.257423 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.257460 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.257479 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.258334 4708 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d" exitCode=0 Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.258443 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d"} Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.258502 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.260014 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.260054 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.260067 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.261180 4708 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08" exitCode=0 Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.261215 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08"} Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.261325 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.264778 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.264813 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.264838 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.267674 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.267793 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7"} Feb 27 
16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.268001 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed"} Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.268034 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd"} Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.269784 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.269837 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:24 crc kubenswrapper[4708]: I0227 16:53:24.269879 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.140612 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Feb 27 16:53:25 crc kubenswrapper[4708]: E0227 16:53:25.188967 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="3.2s" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.276402 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961"} Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.277028 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4"} Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.277053 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20"} Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.285527 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43"} Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.285614 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.287488 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.287533 4708 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.287546 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.287876 4708 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="a3d5461a3377cdf67e1093702283cf41c561dc0bbda831667e727ba8b908765f" exitCode=0 Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.288004 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"a3d5461a3377cdf67e1093702283cf41c561dc0bbda831667e727ba8b908765f"} Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.288026 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:25 crc kubenswrapper[4708]: W0227 16:53:25.289955 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Feb 27 16:53:25 crc kubenswrapper[4708]: E0227 16:53:25.290034 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.290749 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.290784 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.290796 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.292253 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"15db669ad6a213f4d2cc324a27db72c0acd31a31110041ec13a3d5f814ec8824"} Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.292339 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.293479 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.293514 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.293529 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.298542 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7699d4a917e83fccb6a984da6f39b7d253197c376e0936ea4518ec430088b5e7"} Feb 27 16:53:25 
crc kubenswrapper[4708]: I0227 16:53:25.298579 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9c2b33e584adf87a59f83bb2f5cd1f2640fda6fbee761f2aa0957ec11a100468"} Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.298612 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"810bee86b148b1e5fdd078a0344b6c096ab5d8d8666c77e2b3fbb79c35c85cef"} Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.298688 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.302289 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.304676 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.304696 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:25 crc kubenswrapper[4708]: E0227 16:53:25.329445 4708 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.189828b2e267cdd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.138033622 +0000 UTC m=+0.653831249,LastTimestamp:2026-02-27 16:53:22.138033622 +0000 UTC m=+0.653831249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.399279 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.400355 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.400381 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.400391 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.400414 4708 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:53:25 crc kubenswrapper[4708]: E0227 16:53:25.400755 4708 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.182:6443: connect: connection refused" node="crc" Feb 27 16:53:25 crc kubenswrapper[4708]: I0227 16:53:25.933956 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:25 crc kubenswrapper[4708]: W0227 16:53:25.990186 4708 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Feb 27 16:53:25 crc kubenswrapper[4708]: E0227 16:53:25.990359 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:53:26 crc kubenswrapper[4708]: W0227 16:53:26.019832 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.182:6443: connect: connection refused Feb 27 16:53:26 crc kubenswrapper[4708]: E0227 16:53:26.019956 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.182:6443: connect: connection refused" logger="UnhandledError" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.305004 4708 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bd7fed89d7f990df53fa8a32fc10e34bba7f5cf75e6eb1df65b686abfb7d52f2" exitCode=0 Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.305159 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.305092 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bd7fed89d7f990df53fa8a32fc10e34bba7f5cf75e6eb1df65b686abfb7d52f2"} Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.306354 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.306433 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.306492 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.310305 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7de39523312248e79aadc3cf3cb48ab796a323014c84d90a09e8f9ee4083b437"} Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.310351 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1"} Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.310513 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.310541 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.311704 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.311817 4708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.311892 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.312732 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.312779 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.312799 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.312953 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.313000 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.313027 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.312997 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.313114 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.313188 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.313212 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.313197 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:26 crc kubenswrapper[4708]: I0227 16:53:26.313447 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:27 crc kubenswrapper[4708]: I0227 16:53:27.322096 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"cf2dfb10bb5fd1ae500cc0cfa9273a5b6d35ebdf1beeb773749e1199a0f6c402"} Feb 27 16:53:27 crc kubenswrapper[4708]: I0227 16:53:27.322180 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2db933753890a81185cb51437c74fb549f424d32b14f82bfc23c65c1f03656ce"} Feb 27 16:53:27 crc kubenswrapper[4708]: I0227 16:53:27.322206 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"5ca78cbe511dd1d30e907cb00a8c308083f86e23e2d8418e20c97b1ab78014ae"} Feb 27 16:53:27 crc kubenswrapper[4708]: I0227 16:53:27.322254 4708 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 27 16:53:27 crc kubenswrapper[4708]: I0227 16:53:27.322260 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:27 crc kubenswrapper[4708]: I0227 16:53:27.322329 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:27 crc kubenswrapper[4708]: I0227 16:53:27.324048 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:27 crc kubenswrapper[4708]: I0227 16:53:27.324109 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:27 crc kubenswrapper[4708]: I0227 16:53:27.324164 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:27 crc kubenswrapper[4708]: I0227 16:53:27.324061 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:27 crc kubenswrapper[4708]: I0227 16:53:27.324283 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:27 crc kubenswrapper[4708]: I0227 16:53:27.324312 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:27 crc kubenswrapper[4708]: I0227 16:53:27.341978 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:27 crc kubenswrapper[4708]: I0227 16:53:27.350054 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.266977 4708 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.330898 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"30dedd3741667e4539dbb93fae6bdf7a12469cabc64c281107dc9c1607cc4aa3"} Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.331776 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"69f5d95290f15084ede686f64fe8c3d385247674568c1e1d742fc4e1d19dd4e2"} Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.331024 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.331155 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.331010 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.334100 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.334149 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.334169 4708 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.334253 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.334283 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.334299 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.334147 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.334408 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.334424 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.397373 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.601787 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.603664 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.603932 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.603971 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.604026 4708 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.934747 4708 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 16:53:28 crc kubenswrapper[4708]: I0227 16:53:28.934888 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 16:53:29 crc kubenswrapper[4708]: I0227 16:53:29.333831 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:29 crc kubenswrapper[4708]: I0227 16:53:29.334995 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:29 crc kubenswrapper[4708]: I0227 16:53:29.335328 4708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 16:53:29 crc kubenswrapper[4708]: I0227 16:53:29.335434 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:29 crc 
kubenswrapper[4708]: I0227 16:53:29.335733 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:29 crc kubenswrapper[4708]: I0227 16:53:29.335808 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:29 crc kubenswrapper[4708]: I0227 16:53:29.335826 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:29 crc kubenswrapper[4708]: I0227 16:53:29.336370 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:29 crc kubenswrapper[4708]: I0227 16:53:29.336509 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:29 crc kubenswrapper[4708]: I0227 16:53:29.336632 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:29 crc kubenswrapper[4708]: I0227 16:53:29.336944 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:29 crc kubenswrapper[4708]: I0227 16:53:29.337184 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:29 crc kubenswrapper[4708]: I0227 16:53:29.337277 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:29 crc kubenswrapper[4708]: I0227 16:53:29.816459 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:30 crc kubenswrapper[4708]: I0227 16:53:30.337209 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:30 crc kubenswrapper[4708]: I0227 16:53:30.338767 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:30 crc kubenswrapper[4708]: I0227 16:53:30.338885 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:30 crc kubenswrapper[4708]: I0227 16:53:30.338907 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:31 crc kubenswrapper[4708]: I0227 16:53:31.352379 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Feb 27 16:53:31 crc kubenswrapper[4708]: I0227 16:53:31.352611 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:31 crc kubenswrapper[4708]: I0227 16:53:31.354005 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:31 crc kubenswrapper[4708]: I0227 16:53:31.354121 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:31 crc kubenswrapper[4708]: I0227 16:53:31.354177 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:31 crc kubenswrapper[4708]: I0227 16:53:31.915704 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Feb 27 16:53:31 crc kubenswrapper[4708]: I0227 16:53:31.916091 4708 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Feb 27 16:53:31 crc kubenswrapper[4708]: I0227 16:53:31.918083 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:31 crc kubenswrapper[4708]: I0227 16:53:31.918141 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:31 crc kubenswrapper[4708]: I0227 16:53:31.918162 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:32 crc kubenswrapper[4708]: E0227 16:53:32.296436 4708 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 16:53:32 crc kubenswrapper[4708]: I0227 16:53:32.537324 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:32 crc kubenswrapper[4708]: I0227 16:53:32.538343 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:32 crc kubenswrapper[4708]: I0227 16:53:32.540033 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:32 crc kubenswrapper[4708]: I0227 16:53:32.540100 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:32 crc kubenswrapper[4708]: I0227 16:53:32.540120 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:33 crc kubenswrapper[4708]: I0227 16:53:33.637956 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:33 crc kubenswrapper[4708]: I0227 16:53:33.638236 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:33 crc kubenswrapper[4708]: I0227 16:53:33.640426 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:33 crc kubenswrapper[4708]: I0227 16:53:33.640477 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:33 crc kubenswrapper[4708]: I0227 16:53:33.640498 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:33 crc kubenswrapper[4708]: I0227 16:53:33.646386 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:34 crc kubenswrapper[4708]: I0227 16:53:34.350409 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:34 crc kubenswrapper[4708]: I0227 16:53:34.351887 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:34 crc kubenswrapper[4708]: I0227 16:53:34.351943 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:34 crc kubenswrapper[4708]: I0227 16:53:34.351964 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:34 crc kubenswrapper[4708]: I0227 16:53:34.985546 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-etcd/etcd-crc" Feb 27 16:53:34 crc kubenswrapper[4708]: I0227 16:53:34.985981 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:34 crc kubenswrapper[4708]: I0227 16:53:34.988121 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:34 crc kubenswrapper[4708]: I0227 16:53:34.988180 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:34 crc kubenswrapper[4708]: I0227 16:53:34.988199 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:36 crc kubenswrapper[4708]: I0227 16:53:36.140367 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Feb 27 16:53:36 crc kubenswrapper[4708]: W0227 16:53:36.177355 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Feb 27 16:53:36 crc kubenswrapper[4708]: I0227 16:53:36.177491 4708 trace.go:236] Trace[1459218092]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (27-Feb-2026 16:53:26.175) (total time: 10001ms): Feb 27 16:53:36 crc kubenswrapper[4708]: Trace[1459218092]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (16:53:36.177) Feb 27 16:53:36 crc kubenswrapper[4708]: Trace[1459218092]: [10.001757745s] [10.001757745s] END Feb 27 16:53:36 crc kubenswrapper[4708]: E0227 16:53:36.177528 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Feb 27 16:53:36 crc kubenswrapper[4708]: I0227 16:53:36.446072 4708 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Feb 27 16:53:36 crc kubenswrapper[4708]: I0227 16:53:36.446174 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Feb 27 16:53:37 crc kubenswrapper[4708]: I0227 16:53:37.363948 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 16:53:37 crc kubenswrapper[4708]: I0227 16:53:37.367940 4708 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7de39523312248e79aadc3cf3cb48ab796a323014c84d90a09e8f9ee4083b437" exitCode=255 Feb 27 16:53:37 crc kubenswrapper[4708]: I0227 16:53:37.368000 4708 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"7de39523312248e79aadc3cf3cb48ab796a323014c84d90a09e8f9ee4083b437"} Feb 27 16:53:37 crc kubenswrapper[4708]: I0227 16:53:37.368207 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:37 crc kubenswrapper[4708]: I0227 16:53:37.369374 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:37 crc kubenswrapper[4708]: I0227 16:53:37.369442 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:37 crc kubenswrapper[4708]: I0227 16:53:37.369462 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:37 crc kubenswrapper[4708]: I0227 16:53:37.370399 4708 scope.go:117] "RemoveContainer" containerID="7de39523312248e79aadc3cf3cb48ab796a323014c84d90a09e8f9ee4083b437" Feb 27 16:53:37 crc kubenswrapper[4708]: W0227 16:53:37.534830 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:37Z is after 2026-02-23T05:33:13Z Feb 27 16:53:37 crc kubenswrapper[4708]: E0227 16:53:37.535000 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:37Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:53:37 crc kubenswrapper[4708]: I0227 16:53:37.536366 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:37Z is after 2026-02-23T05:33:13Z Feb 27 16:53:37 crc kubenswrapper[4708]: W0227 16:53:37.537843 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:37Z is after 2026-02-23T05:33:13Z Feb 27 16:53:37 crc kubenswrapper[4708]: E0227 16:53:37.537953 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:37Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:53:37 crc kubenswrapper[4708]: E0227 16:53:37.541376 4708 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:37Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189828b2e267cdd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.138033622 +0000 UTC m=+0.653831249,LastTimestamp:2026-02-27 16:53:22.138033622 +0000 UTC m=+0.653831249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:37 crc kubenswrapper[4708]: E0227 16:53:37.588976 4708 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:37Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 16:53:37 crc kubenswrapper[4708]: E0227 16:53:37.589195 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:37Z is after 2026-02-23T05:33:13Z" interval="6.4s" Feb 27 16:53:37 crc kubenswrapper[4708]: W0227 16:53:37.589331 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:37Z is after 2026-02-23T05:33:13Z Feb 27 16:53:37 crc kubenswrapper[4708]: E0227 16:53:37.589405 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:37Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:53:37 crc kubenswrapper[4708]: E0227 16:53:37.590155 4708 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:37Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:53:37 crc kubenswrapper[4708]: I0227 16:53:37.601997 4708 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Feb 27 16:53:37 crc kubenswrapper[4708]: [+]log ok Feb 27 16:53:37 crc kubenswrapper[4708]: [+]etcd ok Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 27 16:53:37 crc kubenswrapper[4708]: 
Feb 27 16:53:37 crc kubenswrapper[4708]: I0227 16:53:37.601997 4708 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]log ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]etcd ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/priority-and-fairness-config-consumer ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/priority-and-fairness-filter ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-apiextensions-informers ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/crd-informer-synced failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-system-namespaces-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-cluster-authentication-info-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-legacy-token-tracking-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-service-ip-repair-controllers ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/bootstrap-controller failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-kube-aggregator-informers ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/apiservice-status-local-available-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/apiservice-status-remote-available-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/apiservice-registration-controller failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/apiservice-wait-for-first-sync ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/kube-apiserver-autoregistration ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]autoregister-completion failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/apiservice-openapi-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/apiservice-openapiv3-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: livez check failed
Feb 27 16:53:37 crc kubenswrapper[4708]: I0227 16:53:37.602048 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 16:53:37 crc kubenswrapper[4708]: I0227 16:53:37.607478 4708 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]log ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]etcd ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-api-request-count-filter ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-startkubeinformers ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-apiserver-admission-initializer ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/generic-apiserver-start-informers ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/priority-and-fairness-config-consumer ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/priority-and-fairness-filter ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/storage-object-count-tracker-hook ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-apiextensions-informers ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/crd-informer-synced failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-system-namespaces-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-cluster-authentication-info-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-legacy-token-tracking-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-service-ip-repair-controllers ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/bootstrap-controller failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/aggregator-reload-proxy-client-cert ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/start-kube-aggregator-informers ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/apiservice-status-local-available-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/apiservice-status-remote-available-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/apiservice-registration-controller failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/apiservice-wait-for-first-sync ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/kube-apiserver-autoregistration ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [-]autoregister-completion failed: reason withheld
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/apiservice-openapi-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: [+]poststarthook/apiservice-openapiv3-controller ok
Feb 27 16:53:37 crc kubenswrapper[4708]: livez check failed
Feb 27 16:53:37 crc kubenswrapper[4708]: I0227 16:53:37.607506 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 16:53:38 crc kubenswrapper[4708]: I0227 16:53:38.144258 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:38Z is after 2026-02-23T05:33:13Z
Feb 27 16:53:38 crc kubenswrapper[4708]: I0227 16:53:38.372977 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log"
Feb 27 16:53:38 crc kubenswrapper[4708]: I0227 16:53:38.375155 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"1db06f254f382dee2e17220dcec918ab6aec79927f194c621922a094b7faebe0"}
Feb 27 16:53:38 crc kubenswrapper[4708]: I0227 16:53:38.375407 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:53:38 crc kubenswrapper[4708]: I0227 16:53:38.376669 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:53:38 crc kubenswrapper[4708]: I0227 16:53:38.376711 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:53:38 crc kubenswrapper[4708]: I0227 16:53:38.376722 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:53:38 crc kubenswrapper[4708]: I0227 16:53:38.934839 4708 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 27 16:53:38 crc kubenswrapper[4708]: I0227 16:53:38.934934 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
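[editor's note] The [+]/[-] lines in the two dumps above are the verbose body of the apiserver health endpoint the startup probe hits (the trailer "livez check failed" names it); each [-] entry is a post-start hook that has not completed, with its reason withheld from unauthenticated callers. A diagnostic-only Go sketch for fetching that body by hand, assuming the endpoint is reachable from the node; TLS verification is skipped here solely because the serving chain is expired:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Debugging sketch: fetch the verbose livez body the probe saw.
	// InsecureSkipVerify only because the serving certificate is expired;
	// never do this outside of incident debugging.
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://api-int.crc.testing:6443/livez?verbose")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("status=%d\n%s", resp.StatusCode, body)
}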
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:39Z is after 2026-02-23T05:33:13Z Feb 27 16:53:39 crc kubenswrapper[4708]: I0227 16:53:39.380884 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 16:53:39 crc kubenswrapper[4708]: I0227 16:53:39.381607 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Feb 27 16:53:39 crc kubenswrapper[4708]: I0227 16:53:39.384719 4708 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="1db06f254f382dee2e17220dcec918ab6aec79927f194c621922a094b7faebe0" exitCode=255 Feb 27 16:53:39 crc kubenswrapper[4708]: I0227 16:53:39.384780 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"1db06f254f382dee2e17220dcec918ab6aec79927f194c621922a094b7faebe0"} Feb 27 16:53:39 crc kubenswrapper[4708]: I0227 16:53:39.384876 4708 scope.go:117] "RemoveContainer" containerID="7de39523312248e79aadc3cf3cb48ab796a323014c84d90a09e8f9ee4083b437" Feb 27 16:53:39 crc kubenswrapper[4708]: I0227 16:53:39.385023 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:39 crc kubenswrapper[4708]: I0227 16:53:39.386386 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:39 crc kubenswrapper[4708]: I0227 16:53:39.386435 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:39 crc kubenswrapper[4708]: I0227 16:53:39.386454 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:39 crc kubenswrapper[4708]: I0227 16:53:39.387303 4708 scope.go:117] "RemoveContainer" containerID="1db06f254f382dee2e17220dcec918ab6aec79927f194c621922a094b7faebe0" Feb 27 16:53:39 crc kubenswrapper[4708]: E0227 16:53:39.387621 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:53:39 crc kubenswrapper[4708]: I0227 16:53:39.822776 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:40 crc kubenswrapper[4708]: I0227 16:53:40.144950 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:40Z is after 2026-02-23T05:33:13Z Feb 27 16:53:40 crc kubenswrapper[4708]: I0227 16:53:40.391771 4708 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log" Feb 27 16:53:40 crc kubenswrapper[4708]: I0227 16:53:40.395168 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:40 crc kubenswrapper[4708]: I0227 16:53:40.396719 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:40 crc kubenswrapper[4708]: I0227 16:53:40.396789 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:40 crc kubenswrapper[4708]: I0227 16:53:40.396809 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:40 crc kubenswrapper[4708]: I0227 16:53:40.397657 4708 scope.go:117] "RemoveContainer" containerID="1db06f254f382dee2e17220dcec918ab6aec79927f194c621922a094b7faebe0" Feb 27 16:53:40 crc kubenswrapper[4708]: E0227 16:53:40.397976 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:53:40 crc kubenswrapper[4708]: I0227 16:53:40.403458 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:41 crc kubenswrapper[4708]: I0227 16:53:41.143983 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:41Z is after 2026-02-23T05:33:13Z Feb 27 16:53:41 crc kubenswrapper[4708]: I0227 16:53:41.403480 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:41 crc kubenswrapper[4708]: I0227 16:53:41.404988 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:41 crc kubenswrapper[4708]: I0227 16:53:41.405041 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:41 crc kubenswrapper[4708]: I0227 16:53:41.405060 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:41 crc kubenswrapper[4708]: I0227 16:53:41.405936 4708 scope.go:117] "RemoveContainer" containerID="1db06f254f382dee2e17220dcec918ab6aec79927f194c621922a094b7faebe0" Feb 27 16:53:41 crc kubenswrapper[4708]: E0227 16:53:41.406214 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:53:42 crc kubenswrapper[4708]: W0227 16:53:42.127578 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:42Z is after 2026-02-23T05:33:13Z Feb 27 16:53:42 crc kubenswrapper[4708]: E0227 16:53:42.127661 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:42Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:53:42 crc kubenswrapper[4708]: I0227 16:53:42.141943 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:42Z is after 2026-02-23T05:33:13Z Feb 27 16:53:42 crc kubenswrapper[4708]: E0227 16:53:42.296521 4708 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 16:53:43 crc kubenswrapper[4708]: I0227 16:53:43.144628 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:43Z is after 2026-02-23T05:33:13Z Feb 27 16:53:43 crc kubenswrapper[4708]: I0227 16:53:43.989146 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:43 crc kubenswrapper[4708]: I0227 16:53:43.990905 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:43 crc kubenswrapper[4708]: I0227 16:53:43.990962 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:43 crc kubenswrapper[4708]: I0227 16:53:43.990981 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:43 crc kubenswrapper[4708]: I0227 16:53:43.991017 4708 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:53:43 crc kubenswrapper[4708]: E0227 16:53:43.994739 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:43Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 16:53:43 crc kubenswrapper[4708]: E0227 16:53:43.998328 4708 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:43Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 16:53:44 crc kubenswrapper[4708]: I0227 16:53:44.143910 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:44Z is after 2026-02-23T05:33:13Z Feb 27 16:53:44 crc kubenswrapper[4708]: I0227 16:53:44.404319 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:44 crc kubenswrapper[4708]: I0227 16:53:44.405379 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:44 crc kubenswrapper[4708]: I0227 16:53:44.407530 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:44 crc kubenswrapper[4708]: I0227 16:53:44.407602 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:44 crc kubenswrapper[4708]: I0227 16:53:44.407623 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:44 crc kubenswrapper[4708]: I0227 16:53:44.408922 4708 scope.go:117] "RemoveContainer" containerID="1db06f254f382dee2e17220dcec918ab6aec79927f194c621922a094b7faebe0" Feb 27 16:53:44 crc kubenswrapper[4708]: E0227 16:53:44.409261 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:53:45 crc kubenswrapper[4708]: I0227 16:53:45.023725 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Feb 27 16:53:45 crc kubenswrapper[4708]: I0227 16:53:45.024016 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:45 crc kubenswrapper[4708]: I0227 16:53:45.025996 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:45 crc kubenswrapper[4708]: I0227 16:53:45.026042 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:45 crc kubenswrapper[4708]: I0227 16:53:45.026061 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:45 crc kubenswrapper[4708]: I0227 16:53:45.043183 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Feb 27 16:53:45 crc kubenswrapper[4708]: I0227 16:53:45.161778 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:45Z is after 2026-02-23T05:33:13Z Feb 27 16:53:45 crc kubenswrapper[4708]: I0227 16:53:45.414365 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:45 crc kubenswrapper[4708]: I0227 16:53:45.416011 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:45 crc kubenswrapper[4708]: I0227 
16:53:45.416076 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:45 crc kubenswrapper[4708]: I0227 16:53:45.416097 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:46 crc kubenswrapper[4708]: I0227 16:53:46.143763 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:46Z is after 2026-02-23T05:33:13Z Feb 27 16:53:46 crc kubenswrapper[4708]: I0227 16:53:46.217184 4708 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Feb 27 16:53:46 crc kubenswrapper[4708]: E0227 16:53:46.222576 4708 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:46Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:53:46 crc kubenswrapper[4708]: I0227 16:53:46.445754 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:53:46 crc kubenswrapper[4708]: I0227 16:53:46.446011 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:46 crc kubenswrapper[4708]: I0227 16:53:46.447370 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:46 crc kubenswrapper[4708]: I0227 16:53:46.447420 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:46 crc kubenswrapper[4708]: I0227 16:53:46.447438 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:46 crc kubenswrapper[4708]: I0227 16:53:46.448302 4708 scope.go:117] "RemoveContainer" containerID="1db06f254f382dee2e17220dcec918ab6aec79927f194c621922a094b7faebe0" Feb 27 16:53:46 crc kubenswrapper[4708]: E0227 16:53:46.448582 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:53:47 crc kubenswrapper[4708]: I0227 16:53:47.145017 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:47Z is after 2026-02-23T05:33:13Z Feb 27 16:53:47 crc kubenswrapper[4708]: W0227 16:53:47.505490 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:47Z is after 2026-02-23T05:33:13Z Feb 27 16:53:47 crc kubenswrapper[4708]: E0227 16:53:47.505592 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:47Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:53:47 crc kubenswrapper[4708]: E0227 16:53:47.546763 4708 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:47Z is after 2026-02-23T05:33:13Z" event="&Event{ObjectMeta:{crc.189828b2e267cdd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.138033622 +0000 UTC m=+0.653831249,LastTimestamp:2026-02-27 16:53:22.138033622 +0000 UTC m=+0.653831249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:48 crc kubenswrapper[4708]: W0227 16:53:48.081065 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z Feb 27 16:53:48 crc kubenswrapper[4708]: E0227 16:53:48.081612 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:53:48 crc kubenswrapper[4708]: I0227 16:53:48.143722 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z Feb 27 16:53:48 crc kubenswrapper[4708]: I0227 16:53:48.934647 4708 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 16:53:48 crc kubenswrapper[4708]: I0227 16:53:48.934738 4708 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 16:53:48 crc kubenswrapper[4708]: I0227 16:53:48.934890 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:48 crc kubenswrapper[4708]: I0227 16:53:48.935089 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:48 crc kubenswrapper[4708]: I0227 16:53:48.936498 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:48 crc kubenswrapper[4708]: I0227 16:53:48.936545 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:48 crc kubenswrapper[4708]: I0227 16:53:48.936561 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:48 crc kubenswrapper[4708]: I0227 16:53:48.937261 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted" Feb 27 16:53:48 crc kubenswrapper[4708]: I0227 16:53:48.937498 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" containerID="cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed" gracePeriod=30 Feb 27 16:53:49 crc kubenswrapper[4708]: I0227 16:53:49.145354 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:49Z is after 2026-02-23T05:33:13Z Feb 27 16:53:49 crc kubenswrapper[4708]: I0227 16:53:49.431684 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 16:53:49 crc kubenswrapper[4708]: I0227 16:53:49.432318 4708 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed" exitCode=255 Feb 27 16:53:49 crc kubenswrapper[4708]: I0227 16:53:49.432376 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed"} Feb 27 16:53:49 crc kubenswrapper[4708]: I0227 16:53:49.432417 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f"} Feb 27 16:53:49 crc kubenswrapper[4708]: I0227 16:53:49.432547 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:49 crc kubenswrapper[4708]: I0227 16:53:49.433900 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:49 crc kubenswrapper[4708]: I0227 16:53:49.433963 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:49 crc kubenswrapper[4708]: I0227 16:53:49.433985 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:49 crc kubenswrapper[4708]: W0227 16:53:49.520185 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:49Z is after 2026-02-23T05:33:13Z Feb 27 16:53:49 crc kubenswrapper[4708]: E0227 16:53:49.520284 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:49Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:53:50 crc kubenswrapper[4708]: I0227 16:53:50.143649 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:50Z is after 2026-02-23T05:33:13Z Feb 27 16:53:50 crc kubenswrapper[4708]: I0227 16:53:50.998966 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:51 crc kubenswrapper[4708]: E0227 16:53:50.999990 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:50Z is after 2026-02-23T05:33:13Z" interval="7s" Feb 27 16:53:51 crc kubenswrapper[4708]: I0227 16:53:51.000643 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:51 crc kubenswrapper[4708]: I0227 16:53:51.000700 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:51 crc kubenswrapper[4708]: I0227 16:53:51.000719 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:51 crc kubenswrapper[4708]: I0227 16:53:51.000754 4708 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:53:51 crc kubenswrapper[4708]: E0227 16:53:51.005442 4708 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:51Z is after 2026-02-23T05:33:13Z" node="crc" Feb 27 16:53:51 crc kubenswrapper[4708]: I0227 16:53:51.144309 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:51Z is after 2026-02-23T05:33:13Z Feb 27 16:53:51 crc kubenswrapper[4708]: W0227 16:53:51.263827 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:51Z is after 2026-02-23T05:33:13Z Feb 27 16:53:51 crc kubenswrapper[4708]: E0227 16:53:51.263965 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:51Z is after 2026-02-23T05:33:13Z" logger="UnhandledError" Feb 27 16:53:52 crc kubenswrapper[4708]: I0227 16:53:52.144267 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:52Z is after 2026-02-23T05:33:13Z Feb 27 16:53:52 crc kubenswrapper[4708]: E0227 16:53:52.296648 4708 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 16:53:53 crc kubenswrapper[4708]: I0227 16:53:53.143637 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:53Z is after 2026-02-23T05:33:13Z Feb 27 16:53:53 crc kubenswrapper[4708]: I0227 16:53:53.637876 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:53 crc kubenswrapper[4708]: I0227 16:53:53.638070 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:53 crc kubenswrapper[4708]: I0227 16:53:53.639575 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:53 crc kubenswrapper[4708]: I0227 16:53:53.639641 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:53 crc kubenswrapper[4708]: I0227 16:53:53.639659 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:54 crc kubenswrapper[4708]: I0227 16:53:54.144612 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:54Z is after 2026-02-23T05:33:13Z Feb 27 16:53:55 crc kubenswrapper[4708]: I0227 16:53:55.144210 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:55Z is after 2026-02-23T05:33:13Z Feb 27 16:53:55 crc kubenswrapper[4708]: I0227 16:53:55.934086 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:53:55 crc kubenswrapper[4708]: I0227 16:53:55.934391 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:55 crc kubenswrapper[4708]: I0227 16:53:55.936087 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:55 crc kubenswrapper[4708]: I0227 16:53:55.936190 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:55 crc kubenswrapper[4708]: I0227 16:53:55.936218 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:56 crc kubenswrapper[4708]: I0227 16:53:56.143985 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:56Z is after 2026-02-23T05:33:13Z Feb 27 16:53:57 crc kubenswrapper[4708]: I0227 16:53:57.145918 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.554701 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e267cdd6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.138033622 +0000 UTC m=+0.653831249,LastTimestamp:2026-02-27 16:53:22.138033622 +0000 UTC m=+0.653831249,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.561355 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5dd881c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.561355 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5dd881c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196080668 +0000 UTC m=+0.711878295,LastTimestamp:2026-02-27 16:53:22.196080668 +0000 UTC m=+0.711878295,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.567495 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de0bab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196114347 +0000 UTC m=+0.711911974,LastTimestamp:2026-02-27 16:53:22.196114347 +0000 UTC m=+0.711911974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.573897 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de5452 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196132946 +0000 UTC m=+0.711930573,LastTimestamp:2026-02-27 16:53:22.196132946 +0000 UTC m=+0.711930573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.580077 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2eb82ca85 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.290797189 +0000 UTC m=+0.806594816,LastTimestamp:2026-02-27 16:53:22.290797189 +0000 UTC m=+0.806594816,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.587405 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5dd881c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5dd881c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196080668 +0000 UTC m=+0.711878295,LastTimestamp:2026-02-27 16:53:22.328974964 +0000 UTC m=+0.844772581,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.594680 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5de0bab\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de0bab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196114347 +0000 UTC m=+0.711911974,LastTimestamp:2026-02-27 16:53:22.329007312 +0000 UTC m=+0.844804939,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.600901 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5de5452\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de5452 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196132946 +0000 UTC m=+0.711930573,LastTimestamp:2026-02-27 16:53:22.329027971 +0000 UTC m=+0.844825588,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.607307 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5dd881c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5dd881c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196080668 +0000 UTC m=+0.711878295,LastTimestamp:2026-02-27 16:53:22.330580609 +0000 UTC m=+0.846378206,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.614713 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5de0bab\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de0bab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196114347 +0000 UTC m=+0.711911974,LastTimestamp:2026-02-27 16:53:22.330604838 +0000 UTC m=+0.846402435,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.623654 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5de5452\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de5452 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196132946 +0000 UTC m=+0.711930573,LastTimestamp:2026-02-27 16:53:22.330616808 +0000 UTC m=+0.846414405,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.630538 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5dd881c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5dd881c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196080668 +0000 UTC m=+0.711878295,LastTimestamp:2026-02-27 16:53:22.330660326 +0000 UTC m=+0.846457953,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.635320 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5de0bab\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de0bab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196114347 +0000 UTC m=+0.711911974,LastTimestamp:2026-02-27 16:53:22.330678995 +0000 UTC m=+0.846476622,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.642091 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5de5452\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de5452 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196132946 +0000 UTC m=+0.711930573,LastTimestamp:2026-02-27 16:53:22.330693364 +0000 UTC m=+0.846490981,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.648348 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5dd881c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5dd881c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196080668 +0000 UTC m=+0.711878295,LastTimestamp:2026-02-27 16:53:22.332452564 +0000 UTC m=+0.848250191,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.655327 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5de0bab\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de0bab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196114347 +0000 UTC m=+0.711911974,LastTimestamp:2026-02-27 16:53:22.332481013 +0000 UTC m=+0.848278640,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.661598 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5de5452\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de5452 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196132946 +0000 UTC m=+0.711930573,LastTimestamp:2026-02-27 16:53:22.332497502 +0000 UTC m=+0.848295129,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.667825 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5dd881c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5dd881c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196080668 +0000 UTC m=+0.711878295,LastTimestamp:2026-02-27 16:53:22.332576019 +0000 UTC m=+0.848373636,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.677891 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5de0bab\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de0bab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196114347 +0000 UTC m=+0.711911974,LastTimestamp:2026-02-27 16:53:22.332633487 +0000 UTC m=+0.848431104,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.687793 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5de5452\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de5452 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196132946 +0000 UTC m=+0.711930573,LastTimestamp:2026-02-27 16:53:22.332653436 +0000 UTC m=+0.848451053,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.697260 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5dd881c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5dd881c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196080668 +0000 UTC m=+0.711878295,LastTimestamp:2026-02-27 16:53:22.334642886 +0000 UTC 
m=+0.850440513,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.704676 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5de0bab\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de0bab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196114347 +0000 UTC m=+0.711911974,LastTimestamp:2026-02-27 16:53:22.334673044 +0000 UTC m=+0.850470671,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.710906 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5de5452\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de5452 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196132946 +0000 UTC m=+0.711930573,LastTimestamp:2026-02-27 16:53:22.334691884 +0000 UTC m=+0.850489511,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.717017 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5dd881c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5dd881c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196080668 +0000 UTC m=+0.711878295,LastTimestamp:2026-02-27 16:53:22.334719473 +0000 UTC m=+0.850517100,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.722944 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.189828b2e5de0bab\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.189828b2e5de0bab default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.196114347 +0000 UTC m=+0.711911974,LastTimestamp:2026-02-27 16:53:22.334750921 +0000 UTC m=+0.850548548,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.730527 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189828b30891757b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.778297723 +0000 UTC m=+1.294095350,LastTimestamp:2026-02-27 16:53:22.778297723 +0000 UTC m=+1.294095350,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.737374 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189828b3089ae264 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.778915428 +0000 UTC m=+1.294713045,LastTimestamp:2026-02-27 16:53:22.778915428 +0000 UTC m=+1.294713045,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.743551 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b308d9d1ee openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
16:53:22.783039982 +0000 UTC m=+1.298837599,LastTimestamp:2026-02-27 16:53:22.783039982 +0000 UTC m=+1.298837599,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.750492 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b309d10b1b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.799242011 +0000 UTC m=+1.315039628,LastTimestamp:2026-02-27 16:53:22.799242011 +0000 UTC m=+1.315039628,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.757070 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b30ac8f313 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:22.815488787 +0000 UTC m=+1.331286404,LastTimestamp:2026-02-27 16:53:22.815488787 +0000 UTC m=+1.331286404,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.765318 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189828b33811ac60 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.575229536 +0000 UTC m=+2.091027153,LastTimestamp:2026-02-27 16:53:23.575229536 +0000 UTC m=+2.091027153,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.771461 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189828b33824ae41 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.576475201 +0000 UTC m=+2.092272818,LastTimestamp:2026-02-27 16:53:23.576475201 +0000 UTC m=+2.092272818,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.778803 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b33825ed99 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.576556953 +0000 UTC m=+2.092354580,LastTimestamp:2026-02-27 16:53:23.576556953 +0000 UTC m=+2.092354580,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.785469 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b33835755f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.577574751 +0000 UTC m=+2.093372368,LastTimestamp:2026-02-27 16:53:23.577574751 +0000 UTC m=+2.093372368,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.792057 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b338c706a9 openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.587114665 +0000 UTC m=+2.102912292,LastTimestamp:2026-02-27 16:53:23.587114665 +0000 UTC m=+2.102912292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.798350 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189828b339617ffa openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.597238266 +0000 UTC m=+2.113035883,LastTimestamp:2026-02-27 16:53:23.597238266 +0000 UTC m=+2.113035883,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.807905 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b33983819c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.599466908 +0000 UTC m=+2.115264535,LastTimestamp:2026-02-27 16:53:23.599466908 +0000 UTC m=+2.115264535,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.814279 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b339992a69 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.600886377 +0000 UTC m=+2.116683974,LastTimestamp:2026-02-27 16:53:23.600886377 +0000 UTC m=+2.116683974,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.820218 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b339db4291 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.605217937 +0000 UTC m=+2.121015534,LastTimestamp:2026-02-27 16:53:23.605217937 +0000 UTC m=+2.121015534,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.826521 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189828b339ead641 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.606238785 +0000 UTC m=+2.122036382,LastTimestamp:2026-02-27 16:53:23.606238785 +0000 UTC m=+2.122036382,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.834443 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b339f61fe8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.606978536 +0000 UTC m=+2.122776153,LastTimestamp:2026-02-27 16:53:23.606978536 +0000 UTC m=+2.122776153,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.845443 4708 event.go:359] 
"Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b34b84279b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.901499291 +0000 UTC m=+2.417296908,LastTimestamp:2026-02-27 16:53:23.901499291 +0000 UTC m=+2.417296908,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.851441 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b34c755dca openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.917307338 +0000 UTC m=+2.433104955,LastTimestamp:2026-02-27 16:53:23.917307338 +0000 UTC m=+2.433104955,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.857891 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b34c8d7486 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.918886022 +0000 UTC m=+2.434683649,LastTimestamp:2026-02-27 16:53:23.918886022 +0000 UTC m=+2.434683649,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.864058 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the 
namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b35c3917ea openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.181792746 +0000 UTC m=+2.697590363,LastTimestamp:2026-02-27 16:53:24.181792746 +0000 UTC m=+2.697590363,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.870196 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b35d572a0d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.200540685 +0000 UTC m=+2.716338302,LastTimestamp:2026-02-27 16:53:24.200540685 +0000 UTC m=+2.716338302,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.876341 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b35d725779 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.202321785 +0000 UTC m=+2.718119382,LastTimestamp:2026-02-27 16:53:24.202321785 +0000 UTC m=+2.718119382,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.884297 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b360b16abb openshift-etcd 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.256787131 +0000 UTC m=+2.772584758,LastTimestamp:2026-02-27 16:53:24.256787131 +0000 UTC m=+2.772584758,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.890784 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189828b360eb0407 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.260561927 +0000 UTC m=+2.776359554,LastTimestamp:2026-02-27 16:53:24.260561927 +0000 UTC m=+2.776359554,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.897520 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189828b361081c3f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.262468671 +0000 UTC m=+2.778266298,LastTimestamp:2026-02-27 16:53:24.262468671 +0000 UTC m=+2.778266298,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.903890 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b3615209c8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.267313608 +0000 UTC m=+2.783111235,LastTimestamp:2026-02-27 16:53:24.267313608 +0000 UTC m=+2.783111235,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.910814 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b372830c59 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.555738201 +0000 UTC m=+3.071535838,LastTimestamp:2026-02-27 16:53:24.555738201 +0000 UTC m=+3.071535838,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.917423 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b372e1af34 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.561940276 +0000 UTC m=+3.077737903,LastTimestamp:2026-02-27 16:53:24.561940276 +0000 UTC m=+3.077737903,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.923602 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b372e8f2fb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container 
etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.562416379 +0000 UTC m=+3.078214006,LastTimestamp:2026-02-27 16:53:24.562416379 +0000 UTC m=+3.078214006,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.929882 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189828b372ea89c0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.562520512 +0000 UTC m=+3.078318139,LastTimestamp:2026-02-27 16:53:24.562520512 +0000 UTC m=+3.078318139,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.936070 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189828b372eacd9a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.562537882 +0000 UTC m=+3.078335509,LastTimestamp:2026-02-27 16:53:24.562537882 +0000 UTC m=+3.078335509,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.942147 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b375270acc openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.60004014 +0000 UTC m=+3.115837777,LastTimestamp:2026-02-27 16:53:24.60004014 +0000 UTC m=+3.115837777,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.948307 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b375703b9d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.604836765 +0000 UTC m=+3.120634392,LastTimestamp:2026-02-27 16:53:24.604836765 +0000 UTC m=+3.120634392,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.954682 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b3758ab08a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.606570634 +0000 UTC m=+3.122368251,LastTimestamp:2026-02-27 16:53:24.606570634 +0000 UTC m=+3.122368251,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.960473 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.189828b3759afa12 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.607638034 +0000 UTC m=+3.123435661,LastTimestamp:2026-02-27 16:53:24.607638034 +0000 UTC m=+3.123435661,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.966243 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in 
API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b375a2ff87 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.608163719 +0000 UTC m=+3.123961346,LastTimestamp:2026-02-27 16:53:24.608163719 +0000 UTC m=+3.123961346,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.973146 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189828b375a40e97 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.608233111 +0000 UTC m=+3.124030728,LastTimestamp:2026-02-27 16:53:24.608233111 +0000 UTC m=+3.124030728,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.979253 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189828b375c58439 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.610425913 +0000 UTC m=+3.126223540,LastTimestamp:2026-02-27 16:53:24.610425913 +0000 UTC m=+3.126223540,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.986486 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189828b38532a099 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.869234841 +0000 UTC m=+3.385032438,LastTimestamp:2026-02-27 16:53:24.869234841 +0000 UTC m=+3.385032438,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:57 crc kubenswrapper[4708]: E0227 16:53:57.994639 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b3853f486a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.870064234 +0000 UTC m=+3.385861811,LastTimestamp:2026-02-27 16:53:24.870064234 +0000 UTC m=+3.385861811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.001083 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189828b3864255f3 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.887041523 +0000 UTC m=+3.402839110,LastTimestamp:2026-02-27 16:53:24.887041523 +0000 UTC m=+3.402839110,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: I0227 16:53:58.005888 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.008713 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.008773 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189828b38654fb21 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.888263457 +0000 UTC m=+3.404061044,LastTimestamp:2026-02-27 16:53:24.888263457 +0000 UTC m=+3.404061044,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: I0227 16:53:58.008924 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:53:58 crc kubenswrapper[4708]: I0227 16:53:58.008990 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:53:58 crc kubenswrapper[4708]: I0227 16:53:58.009009 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:53:58 crc kubenswrapper[4708]: I0227 16:53:58.009109 4708 kubelet_node_status.go:76] "Attempting to register node" node="crc" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.010922 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b38686927d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.891513469 +0000 UTC m=+3.407311056,LastTimestamp:2026-02-27 16:53:24.891513469 +0000 UTC m=+3.407311056,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.011293 4708 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.015443 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b38694a071 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:24.892434545 +0000 UTC m=+3.408232132,LastTimestamp:2026-02-27 16:53:24.892434545 +0000 UTC m=+3.408232132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.017086 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189828b39755568a openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:25.17349953 +0000 UTC m=+3.689297127,LastTimestamp:2026-02-27 16:53:25.17349953 +0000 UTC m=+3.689297127,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.023506 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b3977c55d4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:25.176055252 +0000 UTC m=+3.691852849,LastTimestamp:2026-02-27 16:53:25.176055252 +0000 UTC m=+3.691852849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.029756 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.189828b39836006d openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:3dcd261975c3d6b9a6ad6367fd4facd3,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:25.188223085 +0000 UTC m=+3.704020682,LastTimestamp:2026-02-27 16:53:25.188223085 +0000 UTC m=+3.704020682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.036358 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b398a56cb5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:25.195525301 +0000 UTC m=+3.711322928,LastTimestamp:2026-02-27 16:53:25.195525301 +0000 UTC m=+3.711322928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.042761 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b398b87a13 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:25.196773907 +0000 UTC m=+3.712571514,LastTimestamp:2026-02-27 16:53:25.196773907 +0000 UTC m=+3.712571514,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.049587 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b39e744975 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:25.292968309 +0000 UTC m=+3.808765886,LastTimestamp:2026-02-27 16:53:25.292968309 +0000 UTC m=+3.808765886,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.056809 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b3a7aafe2a openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:25.447548458 +0000 UTC m=+3.963346035,LastTimestamp:2026-02-27 16:53:25.447548458 +0000 UTC m=+3.963346035,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.062625 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b3a8f9afea openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:25.469482986 +0000 UTC m=+3.985280573,LastTimestamp:2026-02-27 16:53:25.469482986 +0000 UTC m=+3.985280573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.086757 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b3a9190d02 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:25.471538434 +0000 UTC 
m=+3.987336011,LastTimestamp:2026-02-27 16:53:25.471538434 +0000 UTC m=+3.987336011,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.091820 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b3ad53d92f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:25.542500655 +0000 UTC m=+4.058298272,LastTimestamp:2026-02-27 16:53:25.542500655 +0000 UTC m=+4.058298272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.108665 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b3ae45dde5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:25.558361573 +0000 UTC m=+4.074159160,LastTimestamp:2026-02-27 16:53:25.558361573 +0000 UTC m=+4.074159160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.114121 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b3b5b44742 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:25.683038018 +0000 UTC m=+4.198835595,LastTimestamp:2026-02-27 16:53:25.683038018 +0000 UTC m=+4.198835595,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.120466 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{kube-apiserver-crc.189828b3b62babcb openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:25.690862539 +0000 UTC m=+4.206660126,LastTimestamp:2026-02-27 16:53:25.690862539 +0000 UTC m=+4.206660126,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.126886 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b3db06beb8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:26.309199544 +0000 UTC m=+4.824997171,LastTimestamp:2026-02-27 16:53:26.309199544 +0000 UTC m=+4.824997171,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.134196 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b3eab0856a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:26.571984234 +0000 UTC m=+5.087781851,LastTimestamp:2026-02-27 16:53:26.571984234 +0000 UTC m=+5.087781851,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: I0227 16:53:58.140489 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.140915 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b3eb6e293a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:26.584412474 +0000 UTC m=+5.100210091,LastTimestamp:2026-02-27 16:53:26.584412474 +0000 UTC m=+5.100210091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.145004 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b3eb885d26 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:26.586129702 +0000 UTC m=+5.101927319,LastTimestamp:2026-02-27 16:53:26.586129702 +0000 UTC m=+5.101927319,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.150684 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b3faebf9db openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:26.844316123 +0000 UTC m=+5.360113740,LastTimestamp:2026-02-27 16:53:26.844316123 +0000 UTC m=+5.360113740,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.156495 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b3fbe8b006 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:26.86087783 +0000 UTC m=+5.376675447,LastTimestamp:2026-02-27 16:53:26.86087783 +0000 UTC m=+5.376675447,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 
16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.160219 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b3fc03c418 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:26.86265244 +0000 UTC m=+5.378450057,LastTimestamp:2026-02-27 16:53:26.86265244 +0000 UTC m=+5.378450057,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.162768 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b40c36d4a9 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:27.134434473 +0000 UTC m=+5.650232090,LastTimestamp:2026-02-27 16:53:27.134434473 +0000 UTC m=+5.650232090,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.165781 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b40d164fbb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:27.149080507 +0000 UTC m=+5.664878134,LastTimestamp:2026-02-27 16:53:27.149080507 +0000 UTC m=+5.664878134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.167258 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b40d2b6bc0 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:27.150463936 +0000 UTC m=+5.666261553,LastTimestamp:2026-02-27 16:53:27.150463936 +0000 UTC m=+5.666261553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.171197 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b41d685d05 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:27.422893317 +0000 UTC m=+5.938690914,LastTimestamp:2026-02-27 16:53:27.422893317 +0000 UTC m=+5.938690914,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.176034 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b41e62a970 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:27.43929688 +0000 UTC m=+5.955094477,LastTimestamp:2026-02-27 16:53:27.43929688 +0000 UTC m=+5.955094477,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.179260 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b41e773cf8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:27.440645368 +0000 UTC m=+5.956442965,LastTimestamp:2026-02-27 16:53:27.440645368 +0000 
UTC m=+5.956442965,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.184736 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b42e15d56f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:27.702697327 +0000 UTC m=+6.218494944,LastTimestamp:2026-02-27 16:53:27.702697327 +0000 UTC m=+6.218494944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.190623 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.189828b42f0a1e97 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:2139d3e2895fc6797b9c76a1b4c9886d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:27.718706839 +0000 UTC m=+6.234504466,LastTimestamp:2026-02-27 16:53:27.718706839 +0000 UTC m=+6.234504466,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.200494 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 16:53:58 crc kubenswrapper[4708]: &Event{ObjectMeta:{kube-controller-manager-crc.189828b47786921e openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Feb 27 16:53:58 crc kubenswrapper[4708]: body: Feb 27 16:53:58 crc kubenswrapper[4708]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:28.93482243 +0000 UTC m=+7.450620047,LastTimestamp:2026-02-27 16:53:28.93482243 +0000 UTC m=+7.450620047,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 16:53:58 crc kubenswrapper[4708]: > Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.206421 4708 event.go:359] "Server rejected event (will not retry!)" err="events 
is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b4778850ed openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:28.934936813 +0000 UTC m=+7.450734430,LastTimestamp:2026-02-27 16:53:28.934936813 +0000 UTC m=+7.450734430,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.214819 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 16:53:58 crc kubenswrapper[4708]: &Event{ObjectMeta:{kube-apiserver-crc.189828b6373c6025 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Feb 27 16:53:58 crc kubenswrapper[4708]: body: Feb 27 16:53:58 crc kubenswrapper[4708]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:36.446152741 +0000 UTC m=+14.961950358,LastTimestamp:2026-02-27 16:53:36.446152741 +0000 UTC m=+14.961950358,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 16:53:58 crc kubenswrapper[4708]: > Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.218613 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b6373d7f9d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:36.446226333 +0000 UTC m=+14.962023950,LastTimestamp:2026-02-27 16:53:36.446226333 +0000 UTC m=+14.962023950,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.225308 4708 
event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189828b3a9190d02\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b3a9190d02 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:25.471538434 +0000 UTC m=+3.987336011,LastTimestamp:2026-02-27 16:53:37.372271695 +0000 UTC m=+15.888069312,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.229402 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 16:53:58 crc kubenswrapper[4708]: &Event{ObjectMeta:{kube-apiserver-crc.189828b67c21b4ae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Feb 27 16:53:58 crc kubenswrapper[4708]: body: [+]ping ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]log ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]etcd ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/generic-apiserver-start-informers ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/priority-and-fairness-filter ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-apiextensions-informers ok Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-system-namespaces-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 27 
16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/bootstrap-controller failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-kube-aggregator-informers ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/apiservice-status-local-available-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 27 16:53:58 crc kubenswrapper[4708]: [-]autoregister-completion failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/apiservice-openapi-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: livez check failed Feb 27 16:53:58 crc kubenswrapper[4708]: Feb 27 16:53:58 crc kubenswrapper[4708]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:37.602032814 +0000 UTC m=+16.117830391,LastTimestamp:2026-02-27 16:53:37.602032814 +0000 UTC m=+16.117830391,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 16:53:58 crc kubenswrapper[4708]: > Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.233884 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b67c225f4f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:37.602076495 +0000 UTC m=+16.117874082,LastTimestamp:2026-02-27 16:53:37.602076495 +0000 UTC m=+16.117874082,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc 
kubenswrapper[4708]: E0227 16:53:58.238492 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189828b67c21b4ae\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Feb 27 16:53:58 crc kubenswrapper[4708]: &Event{ObjectMeta:{kube-apiserver-crc.189828b67c21b4ae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Feb 27 16:53:58 crc kubenswrapper[4708]: body: [+]ping ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]log ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]etcd ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-api-request-count-filter ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-startkubeinformers ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-apiserver-admission-initializer ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/generic-apiserver-start-informers ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/priority-and-fairness-config-consumer ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/priority-and-fairness-filter ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/storage-object-count-tracker-hook ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-apiextensions-informers ok Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/start-apiextensions-controllers failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/crd-informer-synced failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-system-namespaces-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-cluster-authentication-info-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-legacy-token-tracking-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-service-ip-repair-controllers ok Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/priority-and-fairness-config-producer failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/bootstrap-controller failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/start-kube-aggregator-informers ok Feb 27 16:53:58 crc kubenswrapper[4708]: 
[+]poststarthook/apiservice-status-local-available-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/apiservice-status-remote-available-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/apiservice-registration-controller failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/apiservice-wait-for-first-sync ok Feb 27 16:53:58 crc kubenswrapper[4708]: [-]poststarthook/apiservice-discovery-controller failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/kube-apiserver-autoregistration ok Feb 27 16:53:58 crc kubenswrapper[4708]: [-]autoregister-completion failed: reason withheld Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/apiservice-openapi-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: [+]poststarthook/apiservice-openapiv3-controller ok Feb 27 16:53:58 crc kubenswrapper[4708]: livez check failed Feb 27 16:53:58 crc kubenswrapper[4708]: Feb 27 16:53:58 crc kubenswrapper[4708]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:37.602032814 +0000 UTC m=+16.117830391,LastTimestamp:2026-02-27 16:53:37.607499168 +0000 UTC m=+16.123296755,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 16:53:58 crc kubenswrapper[4708]: > Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.245905 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.189828b67c225f4f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.189828b67c225f4f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:f4b27818a5e8e43d0dc095d08835c792,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:37.602076495 +0000 UTC m=+16.117874082,LastTimestamp:2026-02-27 16:53:37.607518419 +0000 UTC m=+16.123316006,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.253217 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 16:53:58 crc kubenswrapper[4708]: &Event{ObjectMeta:{kube-controller-manager-crc.189828b6cb93daa2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 16:53:58 crc kubenswrapper[4708]: body: Feb 27 16:53:58 crc kubenswrapper[4708]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 
16:53:38.934913698 +0000 UTC m=+17.450711315,LastTimestamp:2026-02-27 16:53:38.934913698 +0000 UTC m=+17.450711315,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 16:53:58 crc kubenswrapper[4708]: > Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.259736 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b6cb94cc5c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:38.93497558 +0000 UTC m=+17.450773207,LastTimestamp:2026-02-27 16:53:38.93497558 +0000 UTC m=+17.450773207,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.268762 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189828b6cb93daa2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Feb 27 16:53:58 crc kubenswrapper[4708]: &Event{ObjectMeta:{kube-controller-manager-crc.189828b6cb93daa2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Feb 27 16:53:58 crc kubenswrapper[4708]: body: Feb 27 16:53:58 crc kubenswrapper[4708]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:38.934913698 +0000 UTC m=+17.450711315,LastTimestamp:2026-02-27 16:53:48.934711076 +0000 UTC m=+27.450508703,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Feb 27 16:53:58 crc kubenswrapper[4708]: > Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.275289 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189828b6cb94cc5c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b6cb94cc5c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:38.93497558 +0000 UTC m=+17.450773207,LastTimestamp:2026-02-27 16:53:48.934816069 +0000 UTC m=+27.450613686,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.281287 4708 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b91fc6cb4b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Killing,Message:Container cluster-policy-controller failed startup probe, will be restarted,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:48.937472843 +0000 UTC m=+27.453270470,LastTimestamp:2026-02-27 16:53:48.937472843 +0000 UTC m=+27.453270470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.287487 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189828b339992a69\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b339992a69 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.600886377 +0000 UTC m=+2.116683974,LastTimestamp:2026-02-27 16:53:49.06410599 +0000 UTC m=+27.579903617,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.291657 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189828b34b84279b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b34b84279b 
openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.901499291 +0000 UTC m=+2.417296908,LastTimestamp:2026-02-27 16:53:49.305056403 +0000 UTC m=+27.820854030,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.297310 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189828b34c755dca\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b34c755dca openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:23.917307338 +0000 UTC m=+2.433104955,LastTimestamp:2026-02-27 16:53:49.321436909 +0000 UTC m=+27.837234526,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:58 crc kubenswrapper[4708]: I0227 16:53:58.934370 4708 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 27 16:53:58 crc kubenswrapper[4708]: I0227 16:53:58.934456 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.942659 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189828b6cb93daa2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Feb 27 16:53:58 crc kubenswrapper[4708]: &Event{ObjectMeta:{kube-controller-manager-crc.189828b6cb93daa2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 27 16:53:58 crc kubenswrapper[4708]: body:
Feb 27 16:53:58 crc kubenswrapper[4708]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:38.934913698 +0000 UTC m=+17.450711315,LastTimestamp:2026-02-27 16:53:58.934434461 +0000 UTC m=+37.450232078,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 27 16:53:58 crc kubenswrapper[4708]: >
Feb 27 16:53:58 crc kubenswrapper[4708]: E0227 16:53:58.950036 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189828b6cb94cc5c\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.189828b6cb94cc5c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:38.93497558 +0000 UTC m=+17.450773207,LastTimestamp:2026-02-27 16:53:58.934494852 +0000 UTC m=+37.450292479,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Feb 27 16:53:59 crc kubenswrapper[4708]: I0227 16:53:59.146902 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:00 crc kubenswrapper[4708]: I0227 16:54:00.145990 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:00 crc kubenswrapper[4708]: I0227 16:54:00.227467 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:54:00 crc kubenswrapper[4708]: I0227 16:54:00.229040 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:00 crc kubenswrapper[4708]: I0227 16:54:00.229083 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:00 crc kubenswrapper[4708]: I0227 16:54:00.229099 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:00 crc kubenswrapper[4708]: I0227 16:54:00.229760 4708 scope.go:117] "RemoveContainer" containerID="1db06f254f382dee2e17220dcec918ab6aec79927f194c621922a094b7faebe0"
Feb 27 16:54:01 crc kubenswrapper[4708]: I0227 16:54:01.146912 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:01 crc kubenswrapper[4708]: I0227 16:54:01.472896 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 27 16:54:01 crc kubenswrapper[4708]: I0227 16:54:01.473604 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/1.log"
Feb 27 16:54:01 crc kubenswrapper[4708]: I0227 16:54:01.476957 4708 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9399c62bcbe23961cb5bf3b2cfb2dd66b41ad3178cc13793a0228161d0ce8e7d" exitCode=255
Feb 27 16:54:01 crc kubenswrapper[4708]: I0227 16:54:01.477017 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"9399c62bcbe23961cb5bf3b2cfb2dd66b41ad3178cc13793a0228161d0ce8e7d"}
Feb 27 16:54:01 crc kubenswrapper[4708]: I0227 16:54:01.477076 4708 scope.go:117] "RemoveContainer" containerID="1db06f254f382dee2e17220dcec918ab6aec79927f194c621922a094b7faebe0"
Feb 27 16:54:01 crc kubenswrapper[4708]: I0227 16:54:01.477717 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:54:01 crc kubenswrapper[4708]: I0227 16:54:01.484567 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:01 crc kubenswrapper[4708]: I0227 16:54:01.484650 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:01 crc kubenswrapper[4708]: I0227 16:54:01.484672 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:01 crc kubenswrapper[4708]: I0227 16:54:01.485813 4708 scope.go:117] "RemoveContainer" containerID="9399c62bcbe23961cb5bf3b2cfb2dd66b41ad3178cc13793a0228161d0ce8e7d"
Feb 27 16:54:01 crc kubenswrapper[4708]: E0227 16:54:01.486054 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 16:54:02 crc kubenswrapper[4708]: I0227 16:54:02.147363 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:02 crc kubenswrapper[4708]: E0227 16:54:02.297059 4708 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 16:54:02 crc kubenswrapper[4708]: I0227 16:54:02.482683 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log"
Feb 27 16:54:02 crc kubenswrapper[4708]: I0227 16:54:02.832650 4708 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Feb 27 16:54:02 crc kubenswrapper[4708]: I0227 16:54:02.853834 4708 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Feb 27 16:54:03 crc kubenswrapper[4708]: I0227 16:54:03.146611 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:03 crc kubenswrapper[4708]: W0227 16:54:03.780413 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:03 crc kubenswrapper[4708]: E0227 16:54:03.780484 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 27 16:54:04 crc kubenswrapper[4708]: I0227 16:54:04.146461 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:04 crc kubenswrapper[4708]: I0227 16:54:04.403805 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 16:54:04 crc kubenswrapper[4708]: I0227 16:54:04.404069 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:54:04 crc kubenswrapper[4708]: I0227 16:54:04.405792 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:04 crc kubenswrapper[4708]: I0227 16:54:04.405841 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:04 crc kubenswrapper[4708]: I0227 16:54:04.405890 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:04 crc kubenswrapper[4708]: I0227 16:54:04.406636 4708 scope.go:117] "RemoveContainer" containerID="9399c62bcbe23961cb5bf3b2cfb2dd66b41ad3178cc13793a0228161d0ce8e7d"
Feb 27 16:54:04 crc kubenswrapper[4708]: E0227 16:54:04.406948 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 16:54:05 crc kubenswrapper[4708]: I0227 16:54:05.012278 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:54:05 crc kubenswrapper[4708]: I0227 16:54:05.014193 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:05 crc kubenswrapper[4708]: I0227 16:54:05.014275 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:05 crc kubenswrapper[4708]: I0227 16:54:05.014302 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:05 crc kubenswrapper[4708]: I0227 16:54:05.014354 4708 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 16:54:05 crc kubenswrapper[4708]: E0227 16:54:05.017000 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 16:54:05 crc kubenswrapper[4708]: E0227 16:54:05.017017 4708 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 16:54:05 crc kubenswrapper[4708]: I0227 16:54:05.146810 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:05 crc kubenswrapper[4708]: W0227 16:54:05.265785 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 27 16:54:05 crc kubenswrapper[4708]: E0227 16:54:05.265898 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 27 16:54:06 crc kubenswrapper[4708]: I0227 16:54:06.146079 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:06 crc kubenswrapper[4708]: I0227 16:54:06.445360 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Feb 27 16:54:06 crc kubenswrapper[4708]: I0227 16:54:06.445574 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:54:06 crc kubenswrapper[4708]: I0227 16:54:06.447228 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:06 crc kubenswrapper[4708]: I0227 16:54:06.447279 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:06 crc kubenswrapper[4708]: I0227 16:54:06.447314 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:06 crc kubenswrapper[4708]: I0227 16:54:06.448147 4708 scope.go:117] "RemoveContainer" containerID="9399c62bcbe23961cb5bf3b2cfb2dd66b41ad3178cc13793a0228161d0ce8e7d"
Feb 27 16:54:06 crc kubenswrapper[4708]: E0227 16:54:06.448565 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 16:54:07 crc kubenswrapper[4708]: I0227 16:54:07.146313 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:08 crc kubenswrapper[4708]: I0227 16:54:08.147028 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:08 crc kubenswrapper[4708]: I0227 16:54:08.935029 4708 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Feb 27 16:54:08 crc kubenswrapper[4708]: I0227 16:54:08.935137 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Feb 27 16:54:08 crc kubenswrapper[4708]: E0227 16:54:08.941879 4708 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.189828b6cb93daa2\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=<
Feb 27 16:54:08 crc kubenswrapper[4708]: &Event{ObjectMeta:{kube-controller-manager-crc.189828b6cb93daa2 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:f614b9022728cf315e60c057852e563e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
Feb 27 16:54:08 crc kubenswrapper[4708]: body:
Feb 27 16:54:08 crc kubenswrapper[4708]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:53:38.934913698 +0000 UTC m=+17.450711315,LastTimestamp:2026-02-27 16:54:08.93510314 +0000 UTC m=+47.450900757,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Feb 27 16:54:08 crc kubenswrapper[4708]: >
Feb 27 16:54:09 crc kubenswrapper[4708]: I0227 16:54:09.145768 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:10 crc kubenswrapper[4708]: I0227 16:54:10.145938 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:11 crc kubenswrapper[4708]: I0227 16:54:11.146495 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:11 crc kubenswrapper[4708]: I0227 16:54:11.923763 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Feb 27 16:54:11 crc kubenswrapper[4708]: I0227 16:54:11.924059 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:54:11 crc kubenswrapper[4708]: I0227 16:54:11.925711 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:11 crc kubenswrapper[4708]: I0227 16:54:11.925793 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:11 crc kubenswrapper[4708]: I0227 16:54:11.925814 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:12 crc kubenswrapper[4708]: I0227 16:54:12.017657 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:54:12 crc kubenswrapper[4708]: I0227 16:54:12.019383 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:12 crc kubenswrapper[4708]: I0227 16:54:12.019443 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:12 crc kubenswrapper[4708]: I0227 16:54:12.019462 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:12 crc kubenswrapper[4708]: I0227 16:54:12.019495 4708 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 16:54:12 crc kubenswrapper[4708]: E0227 16:54:12.023373 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 16:54:12 crc kubenswrapper[4708]: E0227 16:54:12.023447 4708 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 16:54:12 crc kubenswrapper[4708]: I0227 16:54:12.143492 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:12 crc kubenswrapper[4708]: E0227 16:54:12.297350 4708 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 16:54:13 crc kubenswrapper[4708]: I0227 16:54:13.145834 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:14 crc kubenswrapper[4708]: I0227 16:54:14.143740 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:14 crc kubenswrapper[4708]: W0227 16:54:14.892320 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "crc" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 27 16:54:14 crc kubenswrapper[4708]: E0227 16:54:14.892392 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 27 16:54:15 crc kubenswrapper[4708]: I0227 16:54:15.145686 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:15 crc kubenswrapper[4708]: I0227 16:54:15.942497 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 16:54:15 crc kubenswrapper[4708]: I0227 16:54:15.942760 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:54:15 crc kubenswrapper[4708]: I0227 16:54:15.944484 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:15 crc kubenswrapper[4708]: I0227 16:54:15.944824 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:15 crc kubenswrapper[4708]: I0227 16:54:15.945054 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:15 crc kubenswrapper[4708]: I0227 16:54:15.948968 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Feb 27 16:54:16 crc kubenswrapper[4708]: I0227 16:54:16.146955 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:16 crc kubenswrapper[4708]: I0227 16:54:16.531101 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:54:16 crc kubenswrapper[4708]: I0227 16:54:16.532378 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:16 crc kubenswrapper[4708]: I0227 16:54:16.532443 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:16 crc kubenswrapper[4708]: I0227 16:54:16.532465 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:16 crc kubenswrapper[4708]: W0227 16:54:16.706721 4708 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 27 16:54:16 crc kubenswrapper[4708]: E0227 16:54:16.706788 4708 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 27 16:54:17 crc kubenswrapper[4708]: I0227 16:54:17.146821 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:18 crc kubenswrapper[4708]: I0227 16:54:18.143949 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:19 crc kubenswrapper[4708]: I0227 16:54:19.024393 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:54:19 crc kubenswrapper[4708]: I0227 16:54:19.026384 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:19 crc kubenswrapper[4708]: I0227 16:54:19.026456 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:19 crc kubenswrapper[4708]: I0227 16:54:19.026478 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:19 crc kubenswrapper[4708]: I0227 16:54:19.026517 4708 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 16:54:19 crc kubenswrapper[4708]: E0227 16:54:19.033698 4708 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 16:54:19 crc kubenswrapper[4708]: E0227 16:54:19.034333 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 16:54:19 crc kubenswrapper[4708]: I0227 16:54:19.146622 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:19 crc kubenswrapper[4708]: I0227 16:54:19.228005 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:54:19 crc kubenswrapper[4708]: I0227 16:54:19.229400 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:19 crc kubenswrapper[4708]: I0227 16:54:19.229454 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:19 crc kubenswrapper[4708]: I0227 16:54:19.229471 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:19 crc kubenswrapper[4708]: I0227 16:54:19.230305 4708 scope.go:117] "RemoveContainer" containerID="9399c62bcbe23961cb5bf3b2cfb2dd66b41ad3178cc13793a0228161d0ce8e7d"
Feb 27 16:54:19 crc kubenswrapper[4708]: E0227 16:54:19.230570 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 16:54:20 crc kubenswrapper[4708]: I0227 16:54:20.146984 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:21 crc kubenswrapper[4708]: I0227 16:54:21.148597 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:22 crc kubenswrapper[4708]: I0227 16:54:22.146412 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:22 crc kubenswrapper[4708]: E0227 16:54:22.298072 4708 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 16:54:23 crc kubenswrapper[4708]: I0227 16:54:23.146451 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:24 crc kubenswrapper[4708]: I0227 16:54:24.164633 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:25 crc kubenswrapper[4708]: I0227 16:54:25.145068 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:26 crc kubenswrapper[4708]: I0227 16:54:26.034271 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:54:26 crc kubenswrapper[4708]: I0227 16:54:26.036594 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:26 crc kubenswrapper[4708]: I0227 16:54:26.036660 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:26 crc kubenswrapper[4708]: I0227 16:54:26.036684 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:26 crc kubenswrapper[4708]: I0227 16:54:26.036729 4708 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 16:54:26 crc kubenswrapper[4708]: E0227 16:54:26.043561 4708 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="crc"
Feb 27 16:54:26 crc kubenswrapper[4708]: E0227 16:54:26.043679 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Feb 27 16:54:26 crc kubenswrapper[4708]: I0227 16:54:26.146982 4708 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Feb 27 16:54:27 crc kubenswrapper[4708]: I0227 16:54:27.113905 4708 csr.go:261] certificate signing request csr-bxwck is approved, waiting to be issued
Feb 27 16:54:27 crc kubenswrapper[4708]: I0227 16:54:27.124895 4708 csr.go:257] certificate signing request csr-bxwck is issued
Feb 27 16:54:27 crc kubenswrapper[4708]: I0227 16:54:27.194895 4708 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Feb 27 16:54:27 crc kubenswrapper[4708]: I0227 16:54:27.978043 4708 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 27 16:54:28 crc kubenswrapper[4708]: I0227 16:54:28.126581 4708 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-25 06:20:15.584268245 +0000 UTC
Feb 27 16:54:28 crc kubenswrapper[4708]: I0227 16:54:28.126641 4708 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6493h25m47.457633524s for next certificate rotation
Feb 27 16:54:32 crc kubenswrapper[4708]: E0227 16:54:32.299150 4708 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.043799 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.045348 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.045402 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.045420 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.045569 4708 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.058312 4708 kubelet_node_status.go:115] "Node was previously registered" node="crc"
Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.058609 4708 kubelet_node_status.go:79] "Successfully registered node" node="crc"
Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.058645 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found"
Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.063513 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.063572 4708 kubelet_node_status.go:724]
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.063590 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.063618 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.063636 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:54:33Z","lastTransitionTime":"2026-02-27T16:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.087686 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.096913 4708 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.096934 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.096944 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.096960 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.096971 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:54:33Z","lastTransitionTime":"2026-02-27T16:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.108016 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.120603 4708 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.120657 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.120674 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.120701 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.120718 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:54:33Z","lastTransitionTime":"2026-02-27T16:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.137331 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.148632 4708 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.148736 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.148794 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.148816 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.148834 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:54:33Z","lastTransitionTime":"2026-02-27T16:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.165725 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.166003 4708 
kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.166044 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.228224 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.229609 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.229725 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.229757 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.231249 4708 scope.go:117] "RemoveContainer" containerID="9399c62bcbe23961cb5bf3b2cfb2dd66b41ad3178cc13793a0228161d0ce8e7d" Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.266511 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.367517 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.468687 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.569822 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.581495 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.584219 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e"} Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.584381 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.587838 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.588016 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:54:33 crc kubenswrapper[4708]: I0227 16:54:33.588038 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.670946 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.772219 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.873306 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" 
not found" Feb 27 16:54:33 crc kubenswrapper[4708]: E0227 16:54:33.973995 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:34 crc kubenswrapper[4708]: E0227 16:54:34.074268 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:34 crc kubenswrapper[4708]: E0227 16:54:34.174669 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:34 crc kubenswrapper[4708]: E0227 16:54:34.274946 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:34 crc kubenswrapper[4708]: E0227 16:54:34.376092 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:34 crc kubenswrapper[4708]: I0227 16:54:34.404724 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:54:34 crc kubenswrapper[4708]: I0227 16:54:34.474920 4708 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 27 16:54:34 crc kubenswrapper[4708]: E0227 16:54:34.476771 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:34 crc kubenswrapper[4708]: E0227 16:54:34.577364 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:34 crc kubenswrapper[4708]: I0227 16:54:34.588668 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 16:54:34 crc kubenswrapper[4708]: I0227 16:54:34.589365 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/2.log" Feb 27 16:54:34 crc kubenswrapper[4708]: I0227 16:54:34.591952 4708 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e" exitCode=255 Feb 27 16:54:34 crc kubenswrapper[4708]: I0227 16:54:34.592010 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e"} Feb 27 16:54:34 crc kubenswrapper[4708]: I0227 16:54:34.592033 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:54:34 crc kubenswrapper[4708]: I0227 16:54:34.592076 4708 scope.go:117] "RemoveContainer" containerID="9399c62bcbe23961cb5bf3b2cfb2dd66b41ad3178cc13793a0228161d0ce8e7d" Feb 27 16:54:34 crc kubenswrapper[4708]: I0227 16:54:34.592830 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:54:34 crc kubenswrapper[4708]: I0227 16:54:34.592926 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:54:34 crc kubenswrapper[4708]: I0227 16:54:34.592945 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:54:34 crc kubenswrapper[4708]: I0227 16:54:34.593926 4708 scope.go:117] 
"RemoveContainer" containerID="3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e" Feb 27 16:54:34 crc kubenswrapper[4708]: E0227 16:54:34.594204 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:54:34 crc kubenswrapper[4708]: E0227 16:54:34.678308 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:34 crc kubenswrapper[4708]: E0227 16:54:34.779098 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:34 crc kubenswrapper[4708]: E0227 16:54:34.879669 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:34 crc kubenswrapper[4708]: E0227 16:54:34.980523 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:35 crc kubenswrapper[4708]: E0227 16:54:35.081221 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:35 crc kubenswrapper[4708]: E0227 16:54:35.181979 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:35 crc kubenswrapper[4708]: E0227 16:54:35.283115 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:35 crc kubenswrapper[4708]: E0227 16:54:35.383673 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:35 crc kubenswrapper[4708]: E0227 16:54:35.484263 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:35 crc kubenswrapper[4708]: E0227 16:54:35.585027 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:35 crc kubenswrapper[4708]: I0227 16:54:35.600840 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 16:54:35 crc kubenswrapper[4708]: I0227 16:54:35.604114 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:54:35 crc kubenswrapper[4708]: I0227 16:54:35.605340 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:54:35 crc kubenswrapper[4708]: I0227 16:54:35.605386 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:54:35 crc kubenswrapper[4708]: I0227 16:54:35.605398 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:54:35 crc kubenswrapper[4708]: I0227 16:54:35.606939 4708 scope.go:117] "RemoveContainer" containerID="3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e" Feb 27 16:54:35 crc kubenswrapper[4708]: E0227 16:54:35.607629 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:54:35 crc kubenswrapper[4708]: E0227 16:54:35.685611 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:35 crc kubenswrapper[4708]: E0227 16:54:35.786590 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:35 crc kubenswrapper[4708]: E0227 16:54:35.887348 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:35 crc kubenswrapper[4708]: E0227 16:54:35.987724 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:36 crc kubenswrapper[4708]: E0227 16:54:36.088065 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:36 crc kubenswrapper[4708]: E0227 16:54:36.188397 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:36 crc kubenswrapper[4708]: E0227 16:54:36.288891 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:36 crc kubenswrapper[4708]: E0227 16:54:36.389728 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:36 crc kubenswrapper[4708]: I0227 16:54:36.446201 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:54:36 crc kubenswrapper[4708]: E0227 16:54:36.490768 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:36 crc kubenswrapper[4708]: E0227 16:54:36.591485 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:36 crc kubenswrapper[4708]: I0227 16:54:36.607675 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:54:36 crc kubenswrapper[4708]: I0227 16:54:36.609530 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:54:36 crc kubenswrapper[4708]: I0227 16:54:36.609614 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:54:36 crc kubenswrapper[4708]: I0227 16:54:36.609635 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:54:36 crc kubenswrapper[4708]: I0227 16:54:36.610733 4708 scope.go:117] "RemoveContainer" containerID="3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e" Feb 27 16:54:36 crc kubenswrapper[4708]: E0227 16:54:36.611086 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:54:36 crc 
kubenswrapper[4708]: E0227 16:54:36.691793 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:36 crc kubenswrapper[4708]: E0227 16:54:36.792267 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:36 crc kubenswrapper[4708]: E0227 16:54:36.892447 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:36 crc kubenswrapper[4708]: E0227 16:54:36.992819 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:37 crc kubenswrapper[4708]: E0227 16:54:37.093043 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:37 crc kubenswrapper[4708]: E0227 16:54:37.194032 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:37 crc kubenswrapper[4708]: E0227 16:54:37.294813 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:37 crc kubenswrapper[4708]: E0227 16:54:37.395498 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:37 crc kubenswrapper[4708]: E0227 16:54:37.496068 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:37 crc kubenswrapper[4708]: I0227 16:54:37.520766 4708 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Feb 27 16:54:37 crc kubenswrapper[4708]: E0227 16:54:37.596568 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:37 crc kubenswrapper[4708]: E0227 16:54:37.697484 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:37 crc kubenswrapper[4708]: E0227 16:54:37.797925 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:37 crc kubenswrapper[4708]: E0227 16:54:37.898812 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:37 crc kubenswrapper[4708]: E0227 16:54:37.999875 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:38 crc kubenswrapper[4708]: E0227 16:54:38.100070 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:38 crc kubenswrapper[4708]: E0227 16:54:38.201034 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:38 crc kubenswrapper[4708]: E0227 16:54:38.302139 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:38 crc kubenswrapper[4708]: E0227 16:54:38.403046 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:38 crc kubenswrapper[4708]: E0227 16:54:38.504163 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:38 crc kubenswrapper[4708]: E0227 16:54:38.604879 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 
16:54:38 crc kubenswrapper[4708]: E0227 16:54:38.705785 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:38 crc kubenswrapper[4708]: E0227 16:54:38.806376 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:38 crc kubenswrapper[4708]: E0227 16:54:38.907246 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:39 crc kubenswrapper[4708]: E0227 16:54:39.007640 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:39 crc kubenswrapper[4708]: E0227 16:54:39.107759 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:39 crc kubenswrapper[4708]: E0227 16:54:39.208476 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:39 crc kubenswrapper[4708]: E0227 16:54:39.309264 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:39 crc kubenswrapper[4708]: E0227 16:54:39.410238 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:39 crc kubenswrapper[4708]: E0227 16:54:39.511287 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:39 crc kubenswrapper[4708]: E0227 16:54:39.612447 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:39 crc kubenswrapper[4708]: E0227 16:54:39.713429 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:39 crc kubenswrapper[4708]: E0227 16:54:39.813748 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:39 crc kubenswrapper[4708]: E0227 16:54:39.914387 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:40 crc kubenswrapper[4708]: E0227 16:54:40.014811 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:40 crc kubenswrapper[4708]: E0227 16:54:40.114936 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:40 crc kubenswrapper[4708]: E0227 16:54:40.216028 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:40 crc kubenswrapper[4708]: E0227 16:54:40.316443 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:40 crc kubenswrapper[4708]: E0227 16:54:40.417509 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:40 crc kubenswrapper[4708]: E0227 16:54:40.518645 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:40 crc kubenswrapper[4708]: E0227 16:54:40.618900 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:40 crc kubenswrapper[4708]: E0227 16:54:40.720102 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" 
not found" Feb 27 16:54:40 crc kubenswrapper[4708]: E0227 16:54:40.821191 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:40 crc kubenswrapper[4708]: E0227 16:54:40.922319 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:41 crc kubenswrapper[4708]: E0227 16:54:41.023319 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:41 crc kubenswrapper[4708]: E0227 16:54:41.123718 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:41 crc kubenswrapper[4708]: E0227 16:54:41.224716 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:41 crc kubenswrapper[4708]: I0227 16:54:41.228242 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:54:41 crc kubenswrapper[4708]: I0227 16:54:41.230015 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:54:41 crc kubenswrapper[4708]: I0227 16:54:41.230053 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:54:41 crc kubenswrapper[4708]: I0227 16:54:41.230062 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:54:41 crc kubenswrapper[4708]: E0227 16:54:41.325341 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:41 crc kubenswrapper[4708]: E0227 16:54:41.425497 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:41 crc kubenswrapper[4708]: E0227 16:54:41.525624 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:41 crc kubenswrapper[4708]: E0227 16:54:41.626531 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:41 crc kubenswrapper[4708]: E0227 16:54:41.727360 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:41 crc kubenswrapper[4708]: E0227 16:54:41.828442 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:41 crc kubenswrapper[4708]: E0227 16:54:41.928765 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:42 crc kubenswrapper[4708]: E0227 16:54:42.029300 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:42 crc kubenswrapper[4708]: E0227 16:54:42.129944 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:42 crc kubenswrapper[4708]: E0227 16:54:42.230960 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:42 crc kubenswrapper[4708]: E0227 16:54:42.301063 4708 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 16:54:42 crc kubenswrapper[4708]: E0227 16:54:42.331372 4708 kubelet_node_status.go:503] "Error getting the 
current node from lister" err="node \"crc\" not found" Feb 27 16:54:42 crc kubenswrapper[4708]: E0227 16:54:42.432438 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:42 crc kubenswrapper[4708]: E0227 16:54:42.533303 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:42 crc kubenswrapper[4708]: E0227 16:54:42.633616 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:42 crc kubenswrapper[4708]: E0227 16:54:42.733969 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:42 crc kubenswrapper[4708]: E0227 16:54:42.834421 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:42 crc kubenswrapper[4708]: E0227 16:54:42.935533 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:43 crc kubenswrapper[4708]: E0227 16:54:43.035768 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:43 crc kubenswrapper[4708]: E0227 16:54:43.136735 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:43 crc kubenswrapper[4708]: E0227 16:54:43.230473 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.236698 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.236763 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.236782 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.236807 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.236826 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:54:43Z","lastTransitionTime":"2026-02-27T16:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:54:43 crc kubenswrapper[4708]: E0227 16:54:43.252964 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.258035 4708 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.258066 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.258077 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.258094 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.258107 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:54:43Z","lastTransitionTime":"2026-02-27T16:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:54:43 crc kubenswrapper[4708]: E0227 16:54:43.275087 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{... status patch payload identical to the previous attempt ...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.279796 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.279874 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.279892 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.279913 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.279930 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:54:43Z","lastTransitionTime":"2026-02-27T16:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:54:43 crc kubenswrapper[4708]: E0227 16:54:43.295611 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{... status patch payload identical to the previous attempt ...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.300998 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.301083 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.301142 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.301177 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:54:43 crc kubenswrapper[4708]: I0227 16:54:43.301199 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:54:43Z","lastTransitionTime":"2026-02-27T16:54:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:54:43 crc kubenswrapper[4708]: E0227 16:54:43.317163 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{... status patch payload identical to the previous attempt ...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 16:54:43 crc kubenswrapper[4708]: E0227 16:54:43.317335 4708
kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 27 16:54:43 crc kubenswrapper[4708]: E0227 16:54:43.317370 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
[the kubelet_node_status.go:503 entry "Error getting the current node from lister" err="node \"crc\" not found" repeats at roughly 100 ms intervals from 16:54:43.417799 through 16:54:52.679705; only the other entries interleaved in that window are shown below]
Feb 27 16:54:49 crc kubenswrapper[4708]: I0227 16:54:49.228044 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Feb 27 16:54:49 crc kubenswrapper[4708]: I0227 16:54:49.229572 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:49 crc kubenswrapper[4708]: I0227 16:54:49.229628 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:49 crc kubenswrapper[4708]: I0227 16:54:49.229651 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:49 crc kubenswrapper[4708]: I0227 16:54:49.230589 4708 scope.go:117] "RemoveContainer" containerID="3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e"
Feb 27 16:54:49 crc kubenswrapper[4708]: E0227 16:54:49.230989 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792"
Feb 27 16:54:51 crc kubenswrapper[4708]: I0227 16:54:51.208376 4708 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Feb 27 16:54:52 crc kubenswrapper[4708]: E0227 16:54:52.301893 4708 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Feb 27 16:54:52 crc kubenswrapper[4708]: E0227 16:54:52.679705 4708 kubelet_node_status.go:503] "Error getting the current node from lister"
err="node \"crc\" not found" Feb 27 16:54:52 crc kubenswrapper[4708]: E0227 16:54:52.780347 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:52 crc kubenswrapper[4708]: E0227 16:54:52.881096 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:52 crc kubenswrapper[4708]: E0227 16:54:52.982119 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.082414 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.182937 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.283418 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.384468 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.485352 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.585775 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.650313 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.654616 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.654676 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.654701 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.654734 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.654761 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:54:53Z","lastTransitionTime":"2026-02-27T16:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.669912 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:54:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.674192 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.674284 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.674313 4708 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.674343 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.674368 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:54:53Z","lastTransitionTime":"2026-02-27T16:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.688172 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.692669 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.692738 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.692763 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
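The two failures recurring in these entries are a webhook endpoint refusing TCP connections (127.0.0.1:9743, the node.network-node-identity.openshift.io webhook) and an empty CNI configuration directory (/etc/kubernetes/cni/net.d/). A minimal, hypothetical Go probe for both conditions, using only the address and path taken from the messages themselves, might look like the following; it is an illustrative sketch to run on the node, not part of the cluster tooling.

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Probe the webhook endpoint that the status patches fail against:
	// "dial tcp 127.0.0.1:9743: connect: connection refused"
	conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 2*time.Second)
	if err != nil {
		fmt.Println("webhook endpoint not reachable:", err)
	} else {
		fmt.Println("webhook endpoint is accepting connections")
		conn.Close()
	}

	// Check the directory the readiness message complains about:
	// "no CNI configuration file in /etc/kubernetes/cni/net.d/"
	entries, err := os.ReadDir("/etc/kubernetes/cni/net.d/")
	if err != nil {
		fmt.Println("cannot read CNI config dir:", err)
		return
	}
	if len(entries) == 0 {
		fmt.Println("CNI config dir is empty; the network plugin has not written its config yet")
	}
	for _, e := range entries {
		fmt.Println("CNI config present:", e.Name())
	}
}

Both checks failing at once is consistent: the network-node-identity webhook pod cannot start before the network plugin is up, so the node stays NotReady and its status patches are rejected.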
Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.692790 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.692817 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:54:53Z","lastTransitionTime":"2026-02-27T16:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.707326 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.711562 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.711591 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.711601 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.711615 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:54:53 crc kubenswrapper[4708]: I0227 16:54:53.711625 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:54:53Z","lastTransitionTime":"2026-02-27T16:54:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.722087 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.722309 4708 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.722353 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.822796 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:53 crc kubenswrapper[4708]: E0227 16:54:53.923732 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:54 crc kubenswrapper[4708]: E0227 16:54:54.024266 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:54 crc kubenswrapper[4708]: E0227 16:54:54.125276 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:54 crc kubenswrapper[4708]: E0227 16:54:54.226442 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:54 crc kubenswrapper[4708]: E0227 16:54:54.326915 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:54 crc kubenswrapper[4708]: E0227 16:54:54.427830 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:54 crc kubenswrapper[4708]: E0227 16:54:54.528703 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:54 crc kubenswrapper[4708]: E0227 16:54:54.629333 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:54 crc kubenswrapper[4708]: E0227 16:54:54.730195 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:54 crc kubenswrapper[4708]: E0227 16:54:54.830315 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:54 crc kubenswrapper[4708]: E0227 16:54:54.930940 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:55 crc kubenswrapper[4708]: E0227 16:54:55.031828 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:55 crc kubenswrapper[4708]: E0227 16:54:55.132265 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:55 crc kubenswrapper[4708]: E0227 16:54:55.232656 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:55 crc kubenswrapper[4708]: E0227 16:54:55.333698 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:55 crc kubenswrapper[4708]: E0227 16:54:55.434588 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:55 crc kubenswrapper[4708]: E0227 16:54:55.534744 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:55 crc kubenswrapper[4708]: E0227 16:54:55.635589 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:55 crc kubenswrapper[4708]: E0227 16:54:55.736209 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:55 crc kubenswrapper[4708]: E0227 16:54:55.837266 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:55 crc kubenswrapper[4708]: E0227 
16:54:55.937684 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:56 crc kubenswrapper[4708]: E0227 16:54:56.038183 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:56 crc kubenswrapper[4708]: E0227 16:54:56.139076 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:56 crc kubenswrapper[4708]: E0227 16:54:56.239301 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:56 crc kubenswrapper[4708]: E0227 16:54:56.340007 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:56 crc kubenswrapper[4708]: E0227 16:54:56.440995 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:56 crc kubenswrapper[4708]: E0227 16:54:56.541599 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:56 crc kubenswrapper[4708]: E0227 16:54:56.641922 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:56 crc kubenswrapper[4708]: E0227 16:54:56.742628 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:56 crc kubenswrapper[4708]: E0227 16:54:56.843441 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:56 crc kubenswrapper[4708]: E0227 16:54:56.944414 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:57 crc kubenswrapper[4708]: E0227 16:54:57.045597 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:57 crc kubenswrapper[4708]: E0227 16:54:57.146428 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:57 crc kubenswrapper[4708]: E0227 16:54:57.247058 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:57 crc kubenswrapper[4708]: E0227 16:54:57.347904 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:57 crc kubenswrapper[4708]: E0227 16:54:57.448883 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:57 crc kubenswrapper[4708]: E0227 16:54:57.549614 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:57 crc kubenswrapper[4708]: E0227 16:54:57.650465 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:57 crc kubenswrapper[4708]: E0227 16:54:57.750936 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:57 crc kubenswrapper[4708]: E0227 16:54:57.851636 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:57 crc kubenswrapper[4708]: E0227 16:54:57.952677 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:58 crc 
kubenswrapper[4708]: E0227 16:54:58.053009 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:58 crc kubenswrapper[4708]: E0227 16:54:58.153077 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:58 crc kubenswrapper[4708]: E0227 16:54:58.253660 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:58 crc kubenswrapper[4708]: E0227 16:54:58.354813 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:58 crc kubenswrapper[4708]: E0227 16:54:58.455470 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:58 crc kubenswrapper[4708]: E0227 16:54:58.556283 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:58 crc kubenswrapper[4708]: E0227 16:54:58.657449 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:58 crc kubenswrapper[4708]: E0227 16:54:58.759212 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:58 crc kubenswrapper[4708]: E0227 16:54:58.860018 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:58 crc kubenswrapper[4708]: E0227 16:54:58.960879 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:59 crc kubenswrapper[4708]: E0227 16:54:59.062040 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:59 crc kubenswrapper[4708]: E0227 16:54:59.162401 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:59 crc kubenswrapper[4708]: I0227 16:54:59.227724 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:54:59 crc kubenswrapper[4708]: I0227 16:54:59.229584 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:54:59 crc kubenswrapper[4708]: I0227 16:54:59.229665 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:54:59 crc kubenswrapper[4708]: I0227 16:54:59.229682 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:54:59 crc kubenswrapper[4708]: E0227 16:54:59.263289 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:59 crc kubenswrapper[4708]: E0227 16:54:59.363745 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:59 crc kubenswrapper[4708]: E0227 16:54:59.464497 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:59 crc kubenswrapper[4708]: E0227 16:54:59.565117 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:59 crc kubenswrapper[4708]: E0227 16:54:59.665402 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 
27 16:54:59 crc kubenswrapper[4708]: E0227 16:54:59.766293 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:59 crc kubenswrapper[4708]: E0227 16:54:59.867198 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:54:59 crc kubenswrapper[4708]: E0227 16:54:59.967400 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:00 crc kubenswrapper[4708]: E0227 16:55:00.068431 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:00 crc kubenswrapper[4708]: E0227 16:55:00.168547 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:00 crc kubenswrapper[4708]: I0227 16:55:00.227639 4708 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Feb 27 16:55:00 crc kubenswrapper[4708]: I0227 16:55:00.229204 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:00 crc kubenswrapper[4708]: I0227 16:55:00.229262 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:00 crc kubenswrapper[4708]: I0227 16:55:00.229287 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:00 crc kubenswrapper[4708]: I0227 16:55:00.230269 4708 scope.go:117] "RemoveContainer" containerID="3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e" Feb 27 16:55:00 crc kubenswrapper[4708]: E0227 16:55:00.230557 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:55:00 crc kubenswrapper[4708]: E0227 16:55:00.269029 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:00 crc kubenswrapper[4708]: E0227 16:55:00.370003 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:00 crc kubenswrapper[4708]: E0227 16:55:00.470452 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:00 crc kubenswrapper[4708]: E0227 16:55:00.571570 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:00 crc kubenswrapper[4708]: E0227 16:55:00.671741 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:00 crc kubenswrapper[4708]: E0227 16:55:00.772499 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:00 crc kubenswrapper[4708]: E0227 16:55:00.873266 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:00 crc kubenswrapper[4708]: E0227 16:55:00.973834 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:01 crc 
kubenswrapper[4708]: E0227 16:55:01.074939 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:01 crc kubenswrapper[4708]: E0227 16:55:01.175970 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:01 crc kubenswrapper[4708]: E0227 16:55:01.276051 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:01 crc kubenswrapper[4708]: E0227 16:55:01.377129 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:01 crc kubenswrapper[4708]: E0227 16:55:01.478076 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:01 crc kubenswrapper[4708]: E0227 16:55:01.579208 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:01 crc kubenswrapper[4708]: E0227 16:55:01.679303 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:01 crc kubenswrapper[4708]: E0227 16:55:01.780243 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:01 crc kubenswrapper[4708]: E0227 16:55:01.881042 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:01 crc kubenswrapper[4708]: E0227 16:55:01.982118 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:02 crc kubenswrapper[4708]: E0227 16:55:02.083197 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:02 crc kubenswrapper[4708]: E0227 16:55:02.184153 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:02 crc kubenswrapper[4708]: E0227 16:55:02.284722 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:02 crc kubenswrapper[4708]: E0227 16:55:02.302952 4708 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Feb 27 16:55:02 crc kubenswrapper[4708]: E0227 16:55:02.385798 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:02 crc kubenswrapper[4708]: E0227 16:55:02.486672 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:02 crc kubenswrapper[4708]: E0227 16:55:02.587931 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:02 crc kubenswrapper[4708]: E0227 16:55:02.689088 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:02 crc kubenswrapper[4708]: E0227 16:55:02.790188 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:02 crc kubenswrapper[4708]: E0227 16:55:02.891183 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:02 crc kubenswrapper[4708]: E0227 16:55:02.991410 4708 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"crc\" not found" Feb 27 16:55:03 crc kubenswrapper[4708]: E0227 16:55:03.091582 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:03 crc kubenswrapper[4708]: E0227 16:55:03.192152 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:03 crc kubenswrapper[4708]: E0227 16:55:03.292694 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:03 crc kubenswrapper[4708]: E0227 16:55:03.393832 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:03 crc kubenswrapper[4708]: E0227 16:55:03.494339 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:03 crc kubenswrapper[4708]: E0227 16:55:03.594942 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:03 crc kubenswrapper[4708]: E0227 16:55:03.696019 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:03 crc kubenswrapper[4708]: E0227 16:55:03.797193 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:03 crc kubenswrapper[4708]: E0227 16:55:03.897970 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:03 crc kubenswrapper[4708]: E0227 16:55:03.998917 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.090645 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.098427 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.098497 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.098523 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.098654 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.098722 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:04Z","lastTransitionTime":"2026-02-27T16:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.116212 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.121034 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.121101 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.121121 4708 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.121144 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.121161 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:04Z","lastTransitionTime":"2026-02-27T16:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.135759 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.142362 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.142420 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.142443 4708 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.142470 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.142492 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:04Z","lastTransitionTime":"2026-02-27T16:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.156799 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.161329 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.161389 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.161414 4708 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.161442 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:04 crc kubenswrapper[4708]: I0227 16:55:04.161463 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:04Z","lastTransitionTime":"2026-02-27T16:55:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.175678 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.175978 4708 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.176029 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.277041 4708 
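Both status-patch attempts above fail the same way: the API server cannot deliver the admission review because nothing is listening on 127.0.0.1:9743, the network-node-identity webhook endpoint, so the kubelet burns through its retry budget and gives up ("update node status exceeds retry count"). A minimal Go sketch of a probe that reproduces the exact dial error seen here; the address and the 10s timeout are taken from the log line itself, everything else is illustrative:

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Endpoint and timeout taken from the failing call in the log:
        // Post "https://127.0.0.1:9743/node?timeout=10s"
        conn, err := net.DialTimeout("tcp", "127.0.0.1:9743", 10*time.Second)
        if err != nil {
            // While the webhook pod is down this prints:
            // dial tcp 127.0.0.1:9743: connect: connection refused
            fmt.Println(err)
            return
        }
        defer conn.Close()
        fmt.Println("webhook endpoint is accepting connections")
    }

The refusal is expected at this point in the boot: the webhook is served by the network-node-identity-vrzqb pod, which (per the SyncLoop ADD below) has not even had its sandbox created yet.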
Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.377820 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.478667 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.579757 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.680807 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.781585 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.882662 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 16:55:04 crc kubenswrapper[4708]: E0227 16:55:04.983411 4708 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.033843 4708 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.086181 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.086231 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.086249 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.086272 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.086292 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:05Z","lastTransitionTime":"2026-02-27T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
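The NotReady condition above is mechanical: the container runtime reports NetworkReady=false until a CNI network configuration exists, and on this cluster ovn-kubernetes only writes one once ovnkube-node is running. A rough Go sketch of the check behind the "no CNI configuration file" message, assuming the conf directory named in the log (the runtime side uses libcni, which scans for .conf, .conflist, and .json files; this standalone probe only approximates that):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Directory taken from the log message:
        // "no CNI configuration file in /etc/kubernetes/cni/net.d/"
        confDir := "/etc/kubernetes/cni/net.d"
        var found []string
        for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
            matches, err := filepath.Glob(filepath.Join(confDir, pat))
            if err == nil {
                found = append(found, matches...)
            }
        }
        if len(found) == 0 {
            fmt.Println("no CNI configuration file in", confDir)
            os.Exit(1) // the node would stay NotReady
        }
        fmt.Println("CNI config present:", found)
    }

Once ovnkube-node-l82mg starts and drops its conflist there, the runtime flips NetworkReady to true and this condition clears on the next sync.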
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.188897 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.188951 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.188969 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.188995 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.189015 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:05Z","lastTransitionTime":"2026-02-27T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.201689 4708 apiserver.go:52] "Watching apiserver"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.209386 4708 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.209972 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-hz8lb","openshift-dns/node-resolver-9s7tp","openshift-multus/multus-p6n6j","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-machine-config-operator/machine-config-daemon-kvxg2","openshift-multus/multus-additional-cni-plugins-bp77l","openshift-multus/network-metrics-daemon-4t52p","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-operator/iptables-alerter-4ln5h","openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz","openshift-ovn-kubernetes/ovnkube-node-l82mg","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-node-identity/network-node-identity-vrzqb"]
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.210437 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.210720 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.210792 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.210895 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.211001 4708 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.211378 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.211537 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.211649 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.211792 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hz8lb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.211799 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.211890 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.211923 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.212121 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.212253 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.211678 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9s7tp" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.212572 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.212763 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.214440 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.214479 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.214979 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.223530 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.225149 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.227044 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.227081 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.227054 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.227345 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.227559 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.227572 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.227721 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.227741 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.228030 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.228104 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.228035 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.228225 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 
16:55:05.228382 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.228435 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.231089 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.231328 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.231517 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.231679 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.231820 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.232132 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.232254 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.232374 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.232487 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.232541 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.231695 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.232797 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.233035 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.233395 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.233587 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.234929 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.235317 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.235384 4708 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.236183 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.244525 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-cnibin\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.244586 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.244761 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.244829 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-systemd-units\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.244898 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-ovnkube-script-lib\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.244935 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-system-cni-dir\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.244975 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.244938 4708 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.245038 4708 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5763b282-e978-499f-a8e2-5b7ed78d691e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-blgrz\" (UID: \"5763b282-e978-499f-a8e2-5b7ed78d691e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.245226 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:05.745198225 +0000 UTC m=+104.260995842 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.245369 4708 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.245564 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.245601 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.245633 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-log-socket\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.245667 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6ftv\" (UniqueName: \"kubernetes.io/projected/ca723997-3668-4afc-afdf-64ae7404b8ba-kube-api-access-g6ftv\") pod \"node-resolver-9s7tp\" (UID: \"ca723997-3668-4afc-afdf-64ae7404b8ba\") " pod="openshift-dns/node-resolver-9s7tp" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.245699 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.245730 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-cni-sysctl-allowlist\") pod 
\"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.245763 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-run-ovn-kubernetes\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.245796 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.245829 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5763b282-e978-499f-a8e2-5b7ed78d691e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-blgrz\" (UID: \"5763b282-e978-499f-a8e2-5b7ed78d691e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.245887 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-os-release\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.245917 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ca723997-3668-4afc-afdf-64ae7404b8ba-hosts-file\") pod \"node-resolver-9s7tp\" (UID: \"ca723997-3668-4afc-afdf-64ae7404b8ba\") " pod="openshift-dns/node-resolver-9s7tp" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.245991 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv98f\" (UniqueName: \"kubernetes.io/projected/6de7f119-b85b-44ae-a478-443eca219825-kube-api-access-xv98f\") pod \"node-ca-hz8lb\" (UID: \"6de7f119-b85b-44ae-a478-443eca219825\") " pod="openshift-image-registry/node-ca-hz8lb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246074 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246143 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-slash\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246178 4708 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-ovn\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246255 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246327 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-run-netns\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246365 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-systemd\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246434 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5763b282-e978-499f-a8e2-5b7ed78d691e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-blgrz\" (UID: \"5763b282-e978-499f-a8e2-5b7ed78d691e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246466 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6de7f119-b85b-44ae-a478-443eca219825-serviceca\") pod \"node-ca-hz8lb\" (UID: \"6de7f119-b85b-44ae-a478-443eca219825\") " pod="openshift-image-registry/node-ca-hz8lb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246220 4708 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246538 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246607 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-env-overrides\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246646 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246727 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246761 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-openvswitch\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246824 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-ovnkube-config\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246909 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc6tg\" (UniqueName: \"kubernetes.io/projected/7efaba13-6a00-4f12-9e83-5a66a2246554-kube-api-access-dc6tg\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246982 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-cni-bin\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.246984 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.247016 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7efaba13-6a00-4f12-9e83-5a66a2246554-ovn-node-metrics-cert\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.247089 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-node-log\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.247127 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.247257 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-kubelet\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.247297 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-var-lib-openvswitch\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.247380 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-cni-binary-copy\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.247692 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4qbh\" (UniqueName: \"kubernetes.io/projected/5763b282-e978-499f-a8e2-5b7ed78d691e-kube-api-access-c4qbh\") pod \"ovnkube-control-plane-749d76644c-blgrz\" (UID: \"5763b282-e978-499f-a8e2-5b7ed78d691e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.247745 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.247785 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-cni-netd\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.247822 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.247880 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6de7f119-b85b-44ae-a478-443eca219825-host\") pod \"node-ca-hz8lb\" (UID: \"6de7f119-b85b-44ae-a478-443eca219825\") " pod="openshift-image-registry/node-ca-hz8lb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.248057 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-etc-openvswitch\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.248929 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.249028 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k2mvm\" (UniqueName: \"kubernetes.io/projected/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-kube-api-access-k2mvm\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.249821 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.257642 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.258582 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.271128 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.272066 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.272108 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.272129 4708 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.272221 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:05.772190662 +0000 UTC m=+104.287988289 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.275408 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.275691 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.275724 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.275744 4708 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.275809 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:05.775786546 +0000 UTC m=+104.291584173 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.279174 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.284071 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.284133 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.292322 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.292374 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.292393 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.292414 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.292426 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.292559 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:05Z","lastTransitionTime":"2026-02-27T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.299007 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: W0227 16:55:05.310517 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-e8d2ba2b01bbce8e52fe39b558b0e4797b9cdf71c59f4131daff236a78ef982c WatchSource:0}: Error finding container e8d2ba2b01bbce8e52fe39b558b0e4797b9cdf71c59f4131daff236a78ef982c: Status 404 returned error can't find the container with id e8d2ba2b01bbce8e52fe39b558b0e4797b9cdf71c59f4131daff236a78ef982c Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.317104 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.338721 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.350281 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.350364 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.350417 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.350467 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.350516 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.350572 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.350619 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.351399 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.350954 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.351471 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.351598 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.351651 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.351252 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.351294 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.351437 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.351703 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.351811 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.351881 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.351934 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.351961 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352010 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352034 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352060 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352129 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.351992 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") 
pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352180 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352183 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352205 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352378 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352450 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352512 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352567 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352617 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352667 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352720 4708 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352727 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352771 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352832 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352926 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352936 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352963 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.352986 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353040 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353093 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353149 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353198 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353250 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353299 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353350 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353410 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353501 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: 
\"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353556 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353609 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353663 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353711 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353739 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353764 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353805 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353823 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353915 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.353974 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354031 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354108 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354153 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354193 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354222 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354229 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354321 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354347 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354357 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354373 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354448 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354485 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354522 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354567 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354583 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354712 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354763 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354801 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354839 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354880 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.354960 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355049 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355086 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355122 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355159 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355194 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355230 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355246 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355265 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355299 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355429 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355471 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355494 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355510 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355543 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355577 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355612 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355651 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355689 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355724 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355761 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355792 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355826 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355923 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355924 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355945 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355953 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.355982 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356011 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356036 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356060 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356087 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356111 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356135 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356162 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356187 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356212 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356240 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356378 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356407 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356433 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356458 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356483 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356511 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356537 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356564 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356594 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356622 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356652 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356678 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356706 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356733 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356756 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356780 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356803 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356826 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356872 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356896 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356920 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356942 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356966 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356990 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.357015 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.357068 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.357093 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.357121 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.357175 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.357200 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.357250 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.357277 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.357263 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.357534 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.358174 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.358242 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.358269 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.359931 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360068 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360095 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360152 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360178 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360222 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360245 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360267 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360321 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360343 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360365 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360410 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360434 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360477 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360503 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360526 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360581 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360608 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360651 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360678 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360700 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.362977 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363062 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363123 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363180 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363232 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363431 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363498 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363568 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363625 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363683 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363737 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363793 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363903 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363963 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364023 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364080 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364138 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364194 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364247 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364305 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364359 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364491 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364556 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364598 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364639 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364678 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364718 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364763 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364802 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364841 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364930 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364977 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.365106 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.365150 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.365192 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.365238 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.365277 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.365319 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.365360 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") "
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.365950 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-systemd-units\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.366014 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-ovnkube-script-lib\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.366058 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-system-cni-dir\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356123 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356935 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.356783 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.357687 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.357710 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.358074 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.358122 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.358217 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.358527 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.358574 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.358695 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.358737 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.358900 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.358930 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.359045 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.366279 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.359249 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.359427 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.359436 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.359444 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.359825 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360114 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360160 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360266 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360379 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.360682 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.361442 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.361998 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.362030 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.362041 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.361719 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.362316 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.362372 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.362388 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.362597 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). 
InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.362631 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.362888 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.362844 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363008 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363209 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363223 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363310 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363361 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363395 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363538 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363419 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363718 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363732 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363903 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363957 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363967 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.363923 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364206 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364257 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364758 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.364872 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.365191 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.365209 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.365220 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.365303 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.365753 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.366121 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.366141 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.366619 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.367680 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.367935 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.367982 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.368035 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.368043 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.368314 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.366141 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtx74\" (UniqueName: \"kubernetes.io/projected/2c5353a5-c388-4046-bb29-8e73352588c2-kube-api-access-vtx74\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369187 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5763b282-e978-499f-a8e2-5b7ed78d691e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-blgrz\" (UID: \"5763b282-e978-499f-a8e2-5b7ed78d691e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369240 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-system-cni-dir\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369279 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369315 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-log-socket\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369398 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6ftv\" (UniqueName: \"kubernetes.io/projected/ca723997-3668-4afc-afdf-64ae7404b8ba-kube-api-access-g6ftv\") pod \"node-resolver-9s7tp\" (UID: \"ca723997-3668-4afc-afdf-64ae7404b8ba\") " pod="openshift-dns/node-resolver-9s7tp" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369439 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0-rootfs\") pod \"machine-config-daemon-kvxg2\" (UID: \"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\") " pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369493 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-run-ovn-kubernetes\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369531 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-var-lib-cni-networks-ovn-kubernetes\") pod 
\"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369589 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5763b282-e978-499f-a8e2-5b7ed78d691e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-blgrz\" (UID: \"5763b282-e978-499f-a8e2-5b7ed78d691e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369626 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369662 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369701 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs\") pod \"network-metrics-daemon-4t52p\" (UID: \"79b58c0b-8d12-4391-999c-9689f9488f46\") " pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369736 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2c5353a5-c388-4046-bb29-8e73352588c2-cni-binary-copy\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369809 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-etc-kubernetes\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369865 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0-proxy-tls\") pod \"machine-config-daemon-kvxg2\" (UID: \"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\") " pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369915 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-slash\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369949 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-ovn\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369983 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-os-release\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.370021 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/ca723997-3668-4afc-afdf-64ae7404b8ba-hosts-file\") pod \"node-resolver-9s7tp\" (UID: \"ca723997-3668-4afc-afdf-64ae7404b8ba\") " pod="openshift-dns/node-resolver-9s7tp" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.370997 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv98f\" (UniqueName: \"kubernetes.io/projected/6de7f119-b85b-44ae-a478-443eca219825-kube-api-access-xv98f\") pod \"node-ca-hz8lb\" (UID: \"6de7f119-b85b-44ae-a478-443eca219825\") " pod="openshift-image-registry/node-ca-hz8lb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.371076 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-run-netns\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.371134 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-systemd\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.371195 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5763b282-e978-499f-a8e2-5b7ed78d691e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-blgrz\" (UID: \"5763b282-e978-499f-a8e2-5b7ed78d691e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.371248 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6de7f119-b85b-44ae-a478-443eca219825-serviceca\") pod \"node-ca-hz8lb\" (UID: \"6de7f119-b85b-44ae-a478-443eca219825\") " pod="openshift-image-registry/node-ca-hz8lb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.371345 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-hostroot\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.372581 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-run-multus-certs\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.372639 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0-mcd-auth-proxy-config\") pod \"machine-config-daemon-kvxg2\" (UID: \"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\") " pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.372705 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-env-overrides\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.372765 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-openvswitch\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.372814 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-ovnkube-config\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.372895 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dc6tg\" (UniqueName: \"kubernetes.io/projected/7efaba13-6a00-4f12-9e83-5a66a2246554-kube-api-access-dc6tg\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.372949 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wgq8g\" (UniqueName: \"kubernetes.io/projected/79b58c0b-8d12-4391-999c-9689f9488f46-kube-api-access-wgq8g\") pod \"network-metrics-daemon-4t52p\" (UID: \"79b58c0b-8d12-4391-999c-9689f9488f46\") " pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373000 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-cni-bin\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373049 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7efaba13-6a00-4f12-9e83-5a66a2246554-ovn-node-metrics-cert\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373093 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" 
(UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-node-log\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373154 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-run-k8s-cni-cncf-io\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373193 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-kubelet\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373227 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-var-lib-openvswitch\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373266 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-cni-binary-copy\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373303 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4qbh\" (UniqueName: \"kubernetes.io/projected/5763b282-e978-499f-a8e2-5b7ed78d691e-kube-api-access-c4qbh\") pod \"ovnkube-control-plane-749d76644c-blgrz\" (UID: \"5763b282-e978-499f-a8e2-5b7ed78d691e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373285 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373563 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-multus-socket-dir-parent\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373612 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-var-lib-cni-bin\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373626 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-openvswitch\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373640 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zf88c\" (UniqueName: \"kubernetes.io/projected/ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0-kube-api-access-zf88c\") pod \"machine-config-daemon-kvxg2\" (UID: \"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\") " pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373890 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-tuning-conf-dir\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.370587 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/ca723997-3668-4afc-afdf-64ae7404b8ba-hosts-file\") pod \"node-resolver-9s7tp\" (UID: \"ca723997-3668-4afc-afdf-64ae7404b8ba\") " pod="openshift-dns/node-resolver-9s7tp" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.368408 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.368480 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.368575 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369153 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369366 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.369590 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.370207 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.370314 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.370874 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.371899 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.371899 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.372274 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.372464 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.372699 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.372951 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.372975 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373024 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373046 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373116 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.373957 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.374243 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.374985 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5763b282-e978-499f-a8e2-5b7ed78d691e-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-blgrz\" (UID: \"5763b282-e978-499f-a8e2-5b7ed78d691e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.375054 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.375590 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-env-overrides\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.375677 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.375717 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.376382 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.376451 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-systemd\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.376418 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-run-netns\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.376804 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-cni-bin\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.377475 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.377547 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.378902 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.379457 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6de7f119-b85b-44ae-a478-443eca219825-serviceca\") pod \"node-ca-hz8lb\" (UID: \"6de7f119-b85b-44ae-a478-443eca219825\") " pod="openshift-image-registry/node-ca-hz8lb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.379756 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.379781 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-node-log\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.379831 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-kubelet\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.379935 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-var-lib-openvswitch\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.380151 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-systemd-units\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.380926 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.380975 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.381014 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.381094 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-ovnkube-config\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.381266 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.381704 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.381896 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.382166 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.383178 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.383266 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-system-cni-dir\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.383279 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.383330 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.383860 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.383902 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.384001 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.384449 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385012 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7efaba13-6a00-4f12-9e83-5a66a2246554-ovn-node-metrics-cert\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385112 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-cni-netd\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385133 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385354 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-cnibin\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385409 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-os-release\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385481 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2c5353a5-c388-4046-bb29-8e73352588c2-multus-daemon-config\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385577 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385635 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-run-netns\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385671 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-var-lib-cni-multus\") 
pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385706 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-multus-conf-dir\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385582 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385669 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-cni-binary-copy\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385773 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385783 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.385965 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-etc-openvswitch\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.386053 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k2mvm\" (UniqueName: \"kubernetes.io/projected/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-kube-api-access-k2mvm\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.386109 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6de7f119-b85b-44ae-a478-443eca219825-host\") pod \"node-ca-hz8lb\" (UID: \"6de7f119-b85b-44ae-a478-443eca219825\") " pod="openshift-image-registry/node-ca-hz8lb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.386171 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-var-lib-kubelet\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.386232 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-cnibin\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.386293 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.386376 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-multus-cni-dir\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.392225 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-run-ovn-kubernetes\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.392343 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-l82mg\" (UID: 
\"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.392625 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-slash\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.392699 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-ovn\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.392787 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-os-release\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.392914 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-cni-netd\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.393025 4708 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.393081 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-etc-openvswitch\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.393248 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:05.893191957 +0000 UTC m=+104.408989554 (durationBeforeRetry 500ms). 
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.393468 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6de7f119-b85b-44ae-a478-443eca219825-host\") pod \"node-ca-hz8lb\" (UID: \"6de7f119-b85b-44ae-a478-443eca219825\") " pod="openshift-image-registry/node-ca-hz8lb"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.393508 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-log-socket\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.393537 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-cnibin\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.393827 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.394706 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.394915 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.395803 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.395968 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5763b282-e978-499f-a8e2-5b7ed78d691e-env-overrides\") pod \"ovnkube-control-plane-749d76644c-blgrz\" (UID: \"5763b282-e978-499f-a8e2-5b7ed78d691e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz"
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.396250 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.397010 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.398174 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.399010 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.399116 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.399157 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.399171 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.399261 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.399267 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.399573 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.399568 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.400388 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.400541 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.401888 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:55:05.901830405 +0000 UTC m=+104.417628032 (durationBeforeRetry 500ms). 
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.401924 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.401973 4708 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402129 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402160 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402201 4708 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402231 4708 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402266 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402296 4708 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402330 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402365 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402392 4708 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402424 4708 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402451 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402483 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402509 4708 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402542 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402573 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402607 4708 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402637 4708 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402672 4708 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402704 4708 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402730 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402761 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402786 4708 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402818 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\""
for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402869 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402927 4708 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402953 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.402985 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403011 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403045 4708 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403071 4708 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403095 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403126 4708 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403151 4708 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403190 4708 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403216 4708 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403249 4708 reconciler_common.go:293] "Volume detached for 
volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403273 4708 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403305 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403330 4708 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403360 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403384 4708 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403407 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403438 4708 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403463 4708 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403496 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403532 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403532 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403558 4708 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403578 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403588 4708 reconciler_common.go:293] "Volume detached 
for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403596 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403613 4708 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403634 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403644 4708 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403669 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403662 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:05Z","lastTransitionTime":"2026-02-27T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403696 4708 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403728 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403771 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403802 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403828 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.403978 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). 
InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404127 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404183 4708 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404205 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404231 4708 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404251 4708 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404277 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404298 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404320 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404350 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404376 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404396 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404416 4708 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404441 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: 
\"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404460 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404480 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404498 4708 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404522 4708 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404541 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404562 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404581 4708 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404605 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404624 4708 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404642 4708 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404667 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404688 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404710 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404730 4708 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404756 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404775 4708 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404795 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404816 4708 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404872 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.404942 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dc6tg\" (UniqueName: \"kubernetes.io/projected/7efaba13-6a00-4f12-9e83-5a66a2246554-kube-api-access-dc6tg\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.405459 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.406594 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-ovnkube-script-lib\") pod \"ovnkube-node-l82mg\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.406986 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.407705 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.411386 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.411426 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.411616 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.412134 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.412552 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.413109 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.413185 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.413234 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.414113 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.414800 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5763b282-e978-499f-a8e2-5b7ed78d691e-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-blgrz\" (UID: \"5763b282-e978-499f-a8e2-5b7ed78d691e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.418120 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.418311 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.418717 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.418747 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.418989 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.420593 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6ftv\" (UniqueName: \"kubernetes.io/projected/ca723997-3668-4afc-afdf-64ae7404b8ba-kube-api-access-g6ftv\") pod \"node-resolver-9s7tp\" (UID: \"ca723997-3668-4afc-afdf-64ae7404b8ba\") " pod="openshift-dns/node-resolver-9s7tp" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.420976 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.421201 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.421292 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.421337 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.421561 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.421566 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.421707 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.421804 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.422218 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.422246 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.422352 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.422477 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.422730 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.423334 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.424075 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.426796 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k2mvm\" (UniqueName: \"kubernetes.io/projected/9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b-kube-api-access-k2mvm\") pod \"multus-additional-cni-plugins-bp77l\" (UID: \"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\") " pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.427048 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.427098 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.427329 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.427460 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.427530 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-bp77l" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.427707 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.427787 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.427940 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.428009 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.428689 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.431261 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.431521 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4qbh\" (UniqueName: \"kubernetes.io/projected/5763b282-e978-499f-a8e2-5b7ed78d691e-kube-api-access-c4qbh\") pod \"ovnkube-control-plane-749d76644c-blgrz\" (UID: \"5763b282-e978-499f-a8e2-5b7ed78d691e\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.432648 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv98f\" (UniqueName: \"kubernetes.io/projected/6de7f119-b85b-44ae-a478-443eca219825-kube-api-access-xv98f\") pod \"node-ca-hz8lb\" (UID: \"6de7f119-b85b-44ae-a478-443eca219825\") " pod="openshift-image-registry/node-ca-hz8lb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.443513 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.448800 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.450478 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.453245 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.458837 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.465170 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.470606 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.475613 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod 
\"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.488206 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.489570 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.494818 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:55:05 crc kubenswrapper[4708]: W0227 16:55:05.494927 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-abd3dd49df3569599d00aa9818e48f852ebecabbeca1e57388f19cf06f313bd5 WatchSource:0}: Error finding container abd3dd49df3569599d00aa9818e48f852ebecabbeca1e57388f19cf06f313bd5: Status 404 returned error can't find the container with id abd3dd49df3569599d00aa9818e48f852ebecabbeca1e57388f19cf06f313bd5 Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.505587 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtx74\" (UniqueName: \"kubernetes.io/projected/2c5353a5-c388-4046-bb29-8e73352588c2-kube-api-access-vtx74\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.505675 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-system-cni-dir\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.505705 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0-rootfs\") pod \"machine-config-daemon-kvxg2\" (UID: \"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\") " pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.505766 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs\") pod 
\"network-metrics-daemon-4t52p\" (UID: \"79b58c0b-8d12-4391-999c-9689f9488f46\") " pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.505810 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2c5353a5-c388-4046-bb29-8e73352588c2-cni-binary-copy\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.505815 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-system-cni-dir\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.505833 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-etc-kubernetes\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.505877 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0-proxy-tls\") pod \"machine-config-daemon-kvxg2\" (UID: \"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\") " pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.505908 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-hostroot\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.505935 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-run-multus-certs\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.505963 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0-mcd-auth-proxy-config\") pod \"machine-config-daemon-kvxg2\" (UID: \"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\") " pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506012 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wgq8g\" (UniqueName: \"kubernetes.io/projected/79b58c0b-8d12-4391-999c-9689f9488f46-kube-api-access-wgq8g\") pod \"network-metrics-daemon-4t52p\" (UID: \"79b58c0b-8d12-4391-999c-9689f9488f46\") " pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506030 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-hostroot\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc 
kubenswrapper[4708]: I0227 16:55:05.506056 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-run-k8s-cni-cncf-io\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506084 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-etc-kubernetes\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506101 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zf88c\" (UniqueName: \"kubernetes.io/projected/ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0-kube-api-access-zf88c\") pod \"machine-config-daemon-kvxg2\" (UID: \"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\") " pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506161 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-multus-socket-dir-parent\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506186 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-var-lib-cni-bin\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506209 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-cnibin\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506237 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-os-release\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506263 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2c5353a5-c388-4046-bb29-8e73352588c2-multus-daemon-config\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506305 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-run-netns\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506331 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-var-lib-cni-multus\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506355 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-multus-conf-dir\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506381 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-var-lib-kubelet\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506430 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-multus-cni-dir\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506494 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506603 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-run-multus-certs\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.505936 4708 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.506691 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs podName:79b58c0b-8d12-4391-999c-9689f9488f46 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:06.006668754 +0000 UTC m=+104.522466351 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs") pod "network-metrics-daemon-4t52p" (UID: "79b58c0b-8d12-4391-999c-9689f9488f46") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506765 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/2c5353a5-c388-4046-bb29-8e73352588c2-cni-binary-copy\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.507311 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/2c5353a5-c388-4046-bb29-8e73352588c2-multus-daemon-config\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.507336 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-multus-conf-dir\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.507388 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-run-netns\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.505813 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0-rootfs\") pod \"machine-config-daemon-kvxg2\" (UID: \"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\") " pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.507980 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-var-lib-cni-multus\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.508011 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-var-lib-kubelet\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.508118 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-run-k8s-cni-cncf-io\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.508329 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-host-var-lib-cni-bin\") pod \"multus-p6n6j\" (UID: 
\"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.508398 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-multus-socket-dir-parent\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.508432 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-cnibin\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.508498 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-os-release\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.508573 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/2c5353a5-c388-4046-bb29-8e73352588c2-multus-cni-dir\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.508708 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0-mcd-auth-proxy-config\") pod \"machine-config-daemon-kvxg2\" (UID: \"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\") " pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.510652 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0-proxy-tls\") pod \"machine-config-daemon-kvxg2\" (UID: \"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\") " pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.506511 4708 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.512705 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.512725 4708 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.512739 4708 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.512752 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.512766 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.512782 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.512796 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.512838 4708 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.512947 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.512962 4708 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.512975 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.512988 4708 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513001 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513015 4708 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513028 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513042 4708 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513055 4708 reconciler_common.go:293] "Volume detached for volume 
\"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513069 4708 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513082 4708 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513100 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513114 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513129 4708 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513142 4708 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513155 4708 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513169 4708 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513193 4708 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513209 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513225 4708 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513240 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513253 4708 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513267 4708 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513279 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513292 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513305 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513320 4708 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513333 4708 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513367 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513382 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513397 4708 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513409 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513423 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513436 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513449 4708 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" 
(UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513463 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513476 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513489 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513503 4708 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513516 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513531 4708 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513544 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513558 4708 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513571 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513586 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513600 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513616 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513628 4708 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513635 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513675 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513705 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513730 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513748 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:05Z","lastTransitionTime":"2026-02-27T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.513643 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514112 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514141 4708 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514160 4708 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514176 4708 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514192 4708 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514211 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514230 4708 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 
16:55:05.514246 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514264 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514280 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514295 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514312 4708 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514331 4708 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514348 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514365 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514381 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514397 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514412 4708 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514427 4708 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514442 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514460 4708 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514476 4708 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514493 4708 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514507 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514522 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514538 4708 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514553 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514570 4708 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514586 4708 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514603 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514619 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514634 4708 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514654 4708 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514671 4708 reconciler_common.go:293] "Volume detached 
for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514737 4708 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514766 4708 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514783 4708 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514802 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514820 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514836 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514875 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514892 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514909 4708 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514926 4708 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.514942 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.526516 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtx74\" (UniqueName: \"kubernetes.io/projected/2c5353a5-c388-4046-bb29-8e73352588c2-kube-api-access-vtx74\") pod \"multus-p6n6j\" (UID: \"2c5353a5-c388-4046-bb29-8e73352588c2\") " pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc 
kubenswrapper[4708]: I0227 16:55:05.529697 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wgq8g\" (UniqueName: \"kubernetes.io/projected/79b58c0b-8d12-4391-999c-9689f9488f46-kube-api-access-wgq8g\") pod \"network-metrics-daemon-4t52p\" (UID: \"79b58c0b-8d12-4391-999c-9689f9488f46\") " pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.530355 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zf88c\" (UniqueName: \"kubernetes.io/projected/ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0-kube-api-access-zf88c\") pod \"machine-config-daemon-kvxg2\" (UID: \"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\") " pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.542561 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Feb 27 16:55:05 crc kubenswrapper[4708]: W0227 16:55:05.568674 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-3272ba4efec6c762db3bd997c621fc62e70e80c21fdf683f5910396404d6ef29 WatchSource:0}: Error finding container 3272ba4efec6c762db3bd997c621fc62e70e80c21fdf683f5910396404d6ef29: Status 404 returned error can't find the container with id 3272ba4efec6c762db3bd997c621fc62e70e80c21fdf683f5910396404d6ef29 Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.602285 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-hz8lb" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.616455 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.616488 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.616502 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.616522 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.616539 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:05Z","lastTransitionTime":"2026-02-27T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.630565 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.638315 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-9s7tp" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.647171 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-p6n6j" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.687760 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" event={"ID":"5763b282-e978-499f-a8e2-5b7ed78d691e","Type":"ContainerStarted","Data":"a216e4f8f84b20e3d0dcf80fc06f8175dd047a501502e5279eac414d90111d1d"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.689469 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" event={"ID":"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b","Type":"ContainerStarted","Data":"93b4bddc9c807c71c4e8b529d150e73ab3e467bdb32ab63f345eb8dc5aef65bd"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.692717 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.692751 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.692764 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"e8d2ba2b01bbce8e52fe39b558b0e4797b9cdf71c59f4131daff236a78ef982c"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.698958 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hz8lb" event={"ID":"6de7f119-b85b-44ae-a478-443eca219825","Type":"ContainerStarted","Data":"36dbc1aa21e945d95a3614ffb9d3d6f1e4d7fe9d413c8907a729ea27562fad68"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.708255 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.715254 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.717145 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"3272ba4efec6c762db3bd997c621fc62e70e80c21fdf683f5910396404d6ef29"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.724134 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.724353 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.724372 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.724391 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.724403 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:05Z","lastTransitionTime":"2026-02-27T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.725336 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.725677 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"abd3dd49df3569599d00aa9818e48f852ebecabbeca1e57388f19cf06f313bd5"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.727863 4708 generic.go:334] "Generic (PLEG): container finished" podID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerID="98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b" exitCode=0 Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.727916 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerDied","Data":"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.727951 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerStarted","Data":"a2ac0b2b7356518d9bce46ade5ea9cc63686575e3580ef47fe4b0f4b75113091"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.736501 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.745690 4708 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.752821 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.763058 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.778901 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"
name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.826237 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.826289 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.826323 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.826451 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.826485 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.826499 4708 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.826555 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:06.826538755 +0000 UTC m=+105.342336342 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.826622 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.826676 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.826701 4708 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.826781 4708 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.826791 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:06.826759962 +0000 UTC m=+105.342557579 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.826914 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:06.826836724 +0000 UTC m=+105.342634301 (durationBeforeRetry 1s). 
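About the MountVolume.SetUp failures: errors of the form object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered do not necessarily mean the ConfigMap is missing from the API server. The kubelet serves ConfigMaps and Secrets to volumes through an internal manager that only hands objects out after the owning pod has been (re)registered with it; right after a kubelet restart (note the m=+105 process uptime in the retry stamps) the volume reconciler can run ahead of that registration, so the mount backs off for 1s and retries. One hedged way to separate "missing object" from "kubelet not caught up yet" is to ask the API server directly; the sketch below assumes a reachable kubeconfig at /root/.kube/config (an assumption, adjust as needed) and uses names copied from the log:

    // cmcheck.go - diagnostic sketch: check server-side existence of the
    // ConfigMap the projected volume cannot resolve.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        _, err = cs.CoreV1().ConfigMaps("openshift-network-diagnostics").
            Get(context.TODO(), "kube-root-ca.crt", metav1.GetOptions{})
        fmt.Println("kube-root-ca.crt lookup:", err) // nil means the object exists server-side
    }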
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.831613 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.832653 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.832681 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.832692 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.832707 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.832716 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:05Z","lastTransitionTime":"2026-02-27T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.850880 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.866272 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.879338 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
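On the shape of the rejected patches: the $setElementOrder/conditions key is a strategic-merge-patch directive; because conditions is a list merged by its "type" key, the patch carries both the entries to merge and the order the final list should take. The failures logged here are therefore well-formed patches dying at the admission layer, not malformed ones. A minimal Go illustration of that patch layout (illustrative structure only, trimmed to a single condition):

    // patchshape.go - sketch: the strategic-merge-patch layout used by the
    // kubelet status manager, reduced to one condition.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        patch := map[string]any{
            "status": map[string]any{
                // order directive for the list merged by "type"
                "$setElementOrder/conditions": []map[string]string{
                    {"type": "Ready"},
                },
                // the actual entries to merge in
                "conditions": []map[string]any{
                    {"type": "Ready", "status": "False", "reason": "ContainersNotReady"},
                },
            },
        }
        b, _ := json.MarshalIndent(patch, "", "  ")
        fmt.Println(string(b))
    }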
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.888500 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.897745 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.908395 4708 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.918487 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.926676 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.927814 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.927956 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:55:06.927937015 +0000 UTC m=+105.443734602 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.928072 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.928233 4708 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: E0227 16:55:05.928278 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:06.928271655 +0000 UTC m=+105.444069242 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.935218 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.935240 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.935249 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.935265 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.935275 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:05Z","lastTransitionTime":"2026-02-27T16:55:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.937382 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\
\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.951831 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.969309 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.976821 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:05 crc kubenswrapper[4708]: I0227 16:55:05.985225 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:05.999953 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.011393 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.021265 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.029061 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs\") pod \"network-metrics-daemon-4t52p\" (UID: \"79b58c0b-8d12-4391-999c-9689f9488f46\") " pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.029196 4708 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.029263 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs podName:79b58c0b-8d12-4391-999c-9689f9488f46 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:07.029245563 +0000 UTC m=+105.545043150 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs") pod "network-metrics-daemon-4t52p" (UID: "79b58c0b-8d12-4391-999c-9689f9488f46") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.030912 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.037160 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.037199 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.037209 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.037225 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.037238 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:06Z","lastTransitionTime":"2026-02-27T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.041355 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:06 crc 
kubenswrapper[4708]: I0227 16:55:06.054353 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.140549 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.140607 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.140625 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.140649 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.140667 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:06Z","lastTransitionTime":"2026-02-27T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.232540 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.233309 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.234471 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.235130 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.236249 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.236804 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.237938 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.240291 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.241733 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.242642 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.242764 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.242876 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.242975 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.243051 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:06Z","lastTransitionTime":"2026-02-27T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.244780 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.246097 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.248379 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.249034 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.249541 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.250065 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.250567 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.251181 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.251612 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.253247 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.253808 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.254288 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.255252 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.255662 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Feb 27 16:55:06 
crc kubenswrapper[4708]: I0227 16:55:06.256726 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.257214 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.258282 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.258893 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.259807 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.260397 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.261365 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.261818 4708 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.261949 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.263540 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.264545 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.265359 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.266840 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.267483 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Feb 27 16:55:06 crc 
kubenswrapper[4708]: I0227 16:55:06.268621 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.269262 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.270225 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.270690 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.271772 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.272425 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.273429 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.273891 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.275001 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.275578 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.276602 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.277214 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.278139 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.278599 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.279257 4708 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.280281 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.280804 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.345338 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.345368 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.345377 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.345390 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.345400 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:06Z","lastTransitionTime":"2026-02-27T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.449105 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.449192 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.449212 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.449240 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.449258 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:06Z","lastTransitionTime":"2026-02-27T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.551597 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.551664 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.551694 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.551730 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.551753 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:06Z","lastTransitionTime":"2026-02-27T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.654612 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.654665 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.654685 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.654713 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.654733 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:06Z","lastTransitionTime":"2026-02-27T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.735050 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.737987 4708 generic.go:334] "Generic (PLEG): container finished" podID="9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b" containerID="aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992" exitCode=0 Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.738050 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" event={"ID":"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b","Type":"ContainerDied","Data":"aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.740147 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p6n6j" event={"ID":"2c5353a5-c388-4046-bb29-8e73352588c2","Type":"ContainerStarted","Data":"74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.740392 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p6n6j" event={"ID":"2c5353a5-c388-4046-bb29-8e73352588c2","Type":"ContainerStarted","Data":"bb09d4913a7b93d69d0bf0bfae127fb27dd11d2c2014e7908ffa4dd801578b2a"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.742962 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.743013 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.743055 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"6e46aeddcf12dedeed73bc0d48fe2595f937cbd1909dbd7ad42d3fa46f26cede"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.751447 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerStarted","Data":"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.751496 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerStarted","Data":"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.751517 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerStarted","Data":"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16"} Feb 27 
16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.751537 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerStarted","Data":"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.751553 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerStarted","Data":"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.751570 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerStarted","Data":"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.753560 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" event={"ID":"5763b282-e978-499f-a8e2-5b7ed78d691e","Type":"ContainerStarted","Data":"5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.753657 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" event={"ID":"5763b282-e978-499f-a8e2-5b7ed78d691e","Type":"ContainerStarted","Data":"068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.756169 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9s7tp" event={"ID":"ca723997-3668-4afc-afdf-64ae7404b8ba","Type":"ContainerStarted","Data":"a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.762728 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-9s7tp" event={"ID":"ca723997-3668-4afc-afdf-64ae7404b8ba","Type":"ContainerStarted","Data":"58639946e4d2cd13da29154dd9d2bcfb4311dcde286f30dab3d0afc5b1c53863"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.763608 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.763745 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.763797 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.763840 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.763917 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:06Z","lastTransitionTime":"2026-02-27T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.765541 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-hz8lb" event={"ID":"6de7f119-b85b-44ae-a478-443eca219825","Type":"ContainerStarted","Data":"cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.767028 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.790161 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.813046 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.835505 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.837261 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 
16:55:06.837363 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.837561 4708 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.837622 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.837668 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:08.83764383 +0000 UTC m=+107.353441437 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.837674 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.837699 4708 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.837783 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.837796 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:08.837769134 +0000 UTC m=+107.353566761 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.837810 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.837831 4708 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.837924 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:08.837902847 +0000 UTC m=+107.353700474 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.837636 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.849495 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.866671 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.866730 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.866747 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.866774 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.866791 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:06Z","lastTransitionTime":"2026-02-27T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.873713 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-02-27T16:55:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.893042 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready
\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"ima
geID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.919797 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:06Z 
is after 2025-08-24T17:21:41Z" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.935024 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.945725 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.945888 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.946079 4708 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:55:08.946055592 +0000 UTC m=+107.461853189 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.946173 4708 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:55:06 crc kubenswrapper[4708]: E0227 16:55:06.946272 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:08.946249527 +0000 UTC m=+107.462047154 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.956996 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.972014 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.972071 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.972092 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.972118 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.972138 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:06Z","lastTransitionTime":"2026-02-27T16:55:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.978223 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:06 crc kubenswrapper[4708]: I0227 16:55:06.996903 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:06Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.010891 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.024442 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.041166 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.046834 4708 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs\") pod \"network-metrics-daemon-4t52p\" (UID: \"79b58c0b-8d12-4391-999c-9689f9488f46\") " pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:07 crc kubenswrapper[4708]: E0227 16:55:07.046996 4708 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:55:07 crc kubenswrapper[4708]: E0227 16:55:07.047086 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs podName:79b58c0b-8d12-4391-999c-9689f9488f46 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:09.04706773 +0000 UTC m=+107.562865317 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs") pod "network-metrics-daemon-4t52p" (UID: "79b58c0b-8d12-4391-999c-9689f9488f46") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.060129 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.074727 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.074782 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.074799 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.074824 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.074843 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:07Z","lastTransitionTime":"2026-02-27T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.077706 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.096228 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.110688 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.123432 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.137054 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.155621 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.
168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.176613 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reaso
n\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc 
kubenswrapper[4708]: I0227 16:55:07.177662 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.177706 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.177715 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.177732 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.177744 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:07Z","lastTransitionTime":"2026-02-27T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.202276 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z 
is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.219451 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.227843 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.227950 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.227963 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:07 crc kubenswrapper[4708]: E0227 16:55:07.228031 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.227949 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:07 crc kubenswrapper[4708]: E0227 16:55:07.228122 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:07 crc kubenswrapper[4708]: E0227 16:55:07.228275 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:07 crc kubenswrapper[4708]: E0227 16:55:07.228389 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.230198 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.246428 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.269062 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.279923 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.279961 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.279971 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.279986 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.279996 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:07Z","lastTransitionTime":"2026-02-27T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.382727 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.383101 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.383110 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.383125 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.383134 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:07Z","lastTransitionTime":"2026-02-27T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.486217 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.486290 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.486308 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.486332 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.486350 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:07Z","lastTransitionTime":"2026-02-27T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.588955 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.588999 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.589011 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.589029 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.589041 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:07Z","lastTransitionTime":"2026-02-27T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.692115 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.692175 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.692189 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.692207 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.692219 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:07Z","lastTransitionTime":"2026-02-27T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.771211 4708 generic.go:334] "Generic (PLEG): container finished" podID="9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b" containerID="23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82" exitCode=0 Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.771372 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" event={"ID":"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b","Type":"ContainerDied","Data":"23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82"} Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.792833 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.797273 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.797310 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.797325 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.797344 4708 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.797360 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:07Z","lastTransitionTime":"2026-02-27T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.821325 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.843907 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.863865 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.897102 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reas
on\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabl
ed\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.899248 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.899278 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.899289 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.899305 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.899316 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:07Z","lastTransitionTime":"2026-02-27T16:55:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.914573 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.936299 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.955615 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.970629 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:07 crc kubenswrapper[4708]: I0227 16:55:07.989294 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:07Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.001831 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.001887 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.001900 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.001922 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.001935 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:08Z","lastTransitionTime":"2026-02-27T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.005343 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\
\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.018049 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.033948 4708 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.047064 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.104499 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.104556 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.104580 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.104607 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.104627 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:08Z","lastTransitionTime":"2026-02-27T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.207725 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.207792 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.207818 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.207871 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.207891 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:08Z","lastTransitionTime":"2026-02-27T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.311567 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.311609 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.311619 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.311636 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.311648 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:08Z","lastTransitionTime":"2026-02-27T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.415161 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.415223 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.415242 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.415271 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.415292 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:08Z","lastTransitionTime":"2026-02-27T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.519019 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.519080 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.519097 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.519123 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.519142 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:08Z","lastTransitionTime":"2026-02-27T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.621886 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.621934 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.621951 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.621979 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.621998 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:08Z","lastTransitionTime":"2026-02-27T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.724971 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.725032 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.725050 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.725075 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.725097 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:08Z","lastTransitionTime":"2026-02-27T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.778748 4708 generic.go:334] "Generic (PLEG): container finished" podID="9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b" containerID="399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4" exitCode=0 Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.778985 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" event={"ID":"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b","Type":"ContainerDied","Data":"399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4"} Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.801449 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.821587 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.827958 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.828014 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.828033 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.828059 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.828078 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:08Z","lastTransitionTime":"2026-02-27T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.842816 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.862097 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.868600 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.868693 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.868774 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:08 crc kubenswrapper[4708]: E0227 16:55:08.868961 4708 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:55:08 crc kubenswrapper[4708]: E0227 16:55:08.869056 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:12.869033977 +0000 UTC m=+111.384831594 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:55:08 crc kubenswrapper[4708]: E0227 16:55:08.869287 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:55:08 crc kubenswrapper[4708]: E0227 16:55:08.869408 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:55:08 crc kubenswrapper[4708]: E0227 16:55:08.869483 4708 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:08 crc kubenswrapper[4708]: E0227 16:55:08.869620 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:55:08 crc kubenswrapper[4708]: E0227 16:55:08.869666 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:55:08 crc kubenswrapper[4708]: E0227 16:55:08.869688 4708 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:08 crc kubenswrapper[4708]: E0227 16:55:08.869701 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:12.869621023 +0000 UTC m=+111.385418830 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:08 crc kubenswrapper[4708]: E0227 16:55:08.869768 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:12.869745617 +0000 UTC m=+111.385543234 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.885012 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482
919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.909075 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.931812 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.932729 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.932807 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.932826 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.932916 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.932947 4708 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:08Z","lastTransitionTime":"2026-02-27T16:55:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.955565 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.970151 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:55:08 crc kubenswrapper[4708]: E0227 16:55:08.970330 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:55:12.970293232 +0000 UTC m=+111.486090849 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.970544 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:08 crc kubenswrapper[4708]: E0227 16:55:08.970705 4708 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:55:08 crc kubenswrapper[4708]: E0227 16:55:08.970781 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:12.970765376 +0000 UTC m=+111.486563003 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.975332 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:08 crc kubenswrapper[4708]: I0227 16:55:08.992756 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:08Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.011775 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\
":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary
-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.032833 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller 
ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\"
:\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"}
,{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\
\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.035740 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.035794 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.035814 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.035841 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.035891 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:09Z","lastTransitionTime":"2026-02-27T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.056430 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.070970 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.071238 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs\") pod \"network-metrics-daemon-4t52p\" (UID: \"79b58c0b-8d12-4391-999c-9689f9488f46\") " pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:09 crc kubenswrapper[4708]: E0227 16:55:09.071419 4708 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:55:09 crc kubenswrapper[4708]: E0227 16:55:09.071506 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs podName:79b58c0b-8d12-4391-999c-9689f9488f46 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:13.071481596 +0000 UTC m=+111.587279223 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs") pod "network-metrics-daemon-4t52p" (UID: "79b58c0b-8d12-4391-999c-9689f9488f46") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.138749 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.138806 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.138826 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.138876 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.138895 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:09Z","lastTransitionTime":"2026-02-27T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.227706 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.227801 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:09 crc kubenswrapper[4708]: E0227 16:55:09.228270 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.227831 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.227828 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:09 crc kubenswrapper[4708]: E0227 16:55:09.228459 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:09 crc kubenswrapper[4708]: E0227 16:55:09.228612 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:09 crc kubenswrapper[4708]: E0227 16:55:09.228901 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.242067 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.242120 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.242137 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.242159 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.242178 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:09Z","lastTransitionTime":"2026-02-27T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.345507 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.345590 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.345620 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.345654 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.345677 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:09Z","lastTransitionTime":"2026-02-27T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.449146 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.449212 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.449230 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.449257 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.449275 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:09Z","lastTransitionTime":"2026-02-27T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.551840 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.551925 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.551942 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.551969 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.551987 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:09Z","lastTransitionTime":"2026-02-27T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.657701 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.657749 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.657761 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.657783 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.657795 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:09Z","lastTransitionTime":"2026-02-27T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.763912 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.763985 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.764011 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.764042 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.764061 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:09Z","lastTransitionTime":"2026-02-27T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.788684 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerStarted","Data":"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963"} Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.790759 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51"} Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.794936 4708 generic.go:334] "Generic (PLEG): container finished" podID="9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b" containerID="d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617" exitCode=0 Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.795035 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" event={"ID":"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b","Type":"ContainerDied","Data":"d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617"} Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.817039 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.846197 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.865669 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.870264 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.870334 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.870354 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.870384 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.870402 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:09Z","lastTransitionTime":"2026-02-27T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: 
no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.884157 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.904729 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.916302 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers 
with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.931763 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.949378 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.967049 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.974626 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.974692 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.974711 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.974736 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.974755 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:09Z","lastTransitionTime":"2026-02-27T16:55:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:09 crc kubenswrapper[4708]: I0227 16:55:09.991025 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccoun
t\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},
{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:09Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.013968 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":
\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"rea
dOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05
Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.026189 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\
\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.042285 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.058109 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.076537 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.078094 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.078154 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.078174 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.078200 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.078220 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:10Z","lastTransitionTime":"2026-02-27T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.093107 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.110405 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.128473 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.148357 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.166728 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.181127 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.183355 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.183420 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.183439 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.183467 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.183487 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:10Z","lastTransitionTime":"2026-02-27T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.199785 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"host
IP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.222162 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.248290 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z 
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.268005 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.285670 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.285749 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.285776 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.285828 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.285886 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:10Z","lastTransitionTime":"2026-02-27T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.288411 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.307024 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.328029 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.388356 4708 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.388398 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.388411 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.388428 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.388439 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:10Z","lastTransitionTime":"2026-02-27T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.491218 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.491285 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.491313 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.491347 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.491374 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:10Z","lastTransitionTime":"2026-02-27T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.594638 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.594710 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.594733 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.594762 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.594783 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:10Z","lastTransitionTime":"2026-02-27T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.697878 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.697996 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.698011 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.698030 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.698043 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:10Z","lastTransitionTime":"2026-02-27T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.801377 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.801468 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.801493 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.801527 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.801553 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:10Z","lastTransitionTime":"2026-02-27T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.816524 4708 generic.go:334] "Generic (PLEG): container finished" podID="9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b" containerID="3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1" exitCode=0 Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.817428 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" event={"ID":"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b","Type":"ContainerDied","Data":"3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1"} Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.848344 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.867086 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.882294 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.895479 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.906753 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.906815 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.906834 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.906887 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.906910 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:10Z","lastTransitionTime":"2026-02-27T16:55:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.913225 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.935993 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z 
is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.958523 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.978146 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:10 crc kubenswrapper[4708]: I0227 16:55:10.990047 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:10Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.002306 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.009559 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.009617 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.009636 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.009664 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.009682 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:11Z","lastTransitionTime":"2026-02-27T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.020003 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.036267 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.048816 4708 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.064228 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.112993 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.113097 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.113138 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.113167 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.113186 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:11Z","lastTransitionTime":"2026-02-27T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.225174 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.225623 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.225642 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.225669 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.225692 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:11Z","lastTransitionTime":"2026-02-27T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.227569 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.227646 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.227665 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.227681 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:11 crc kubenswrapper[4708]: E0227 16:55:11.227715 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:11 crc kubenswrapper[4708]: E0227 16:55:11.227809 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:11 crc kubenswrapper[4708]: E0227 16:55:11.227968 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:11 crc kubenswrapper[4708]: E0227 16:55:11.228085 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.328800 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.328885 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.328910 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.328935 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.328953 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:11Z","lastTransitionTime":"2026-02-27T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.432064 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.432131 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.432150 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.432175 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.432192 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:11Z","lastTransitionTime":"2026-02-27T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.535184 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.535237 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.535250 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.535269 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.535281 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:11Z","lastTransitionTime":"2026-02-27T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.638465 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.638546 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.638570 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.638598 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.638617 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:11Z","lastTransitionTime":"2026-02-27T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.742402 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.742455 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.742471 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.742496 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.742511 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:11Z","lastTransitionTime":"2026-02-27T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.825688 4708 generic.go:334] "Generic (PLEG): container finished" podID="9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b" containerID="e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641" exitCode=0 Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.825794 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" event={"ID":"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b","Type":"ContainerDied","Data":"e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641"} Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.835485 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerStarted","Data":"a2d038e49bb86f1716443bd69f020ce861f7598528a940b2b59139bd86de5ba3"} Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.835900 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.835954 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.835974 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.847458 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.847497 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.847506 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.847544 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.847553 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:11Z","lastTransitionTime":"2026-02-27T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.851712 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.882897 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.892323 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.892334 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.905103 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.925694 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.947071 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.950599 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.950645 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.950662 4708 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.950686 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.950703 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:11Z","lastTransitionTime":"2026-02-27T16:55:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.961152 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.974862 4708 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:55:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:11 crc kubenswrapper[4708]: I0227 16:55:11.991967 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\
\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},
\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:11Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.046875 
4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd
47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.054808 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.054830 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.054839 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.054865 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.054875 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:12Z","lastTransitionTime":"2026-02-27T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.062970 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.080793 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.098605 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.113702 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.132382 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.149308 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.157193 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.157245 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.157265 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" 
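Every failed status patch in this stretch of the log has the same root cause: the pod.network-node-identity.openshift.io webhook at 127.0.0.1:9743 is serving a certificate that expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-27. Below is a minimal Go sketch for confirming this from the node; the address is copied from the log lines above, and everything else (the program itself, its output format) is illustrative and not part of any cluster tooling. The dial deliberately skips verification so the expired certificate can still be retrieved and its NotAfter date compared with the current time.

    package main

    import (
        "crypto/tls"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // Webhook endpoint taken from the log; override on the command line if needed.
        addr := "127.0.0.1:9743"
        if len(os.Args) > 1 {
            addr = os.Args[1]
        }

        // Skip verification on purpose: the point is to inspect a certificate
        // that would otherwise fail the handshake for being expired.
        conn, err := tls.Dial("tcp", addr, &tls.Config{InsecureSkipVerify: true})
        if err != nil {
            fmt.Fprintln(os.Stderr, "dial:", err)
            os.Exit(1)
        }
        defer conn.Close()

        now := time.Now()
        for _, cert := range conn.ConnectionState().PeerCertificates {
            fmt.Printf("subject=%s\n  notBefore=%s\n  notAfter=%s\n",
                cert.Subject,
                cert.NotBefore.Format(time.RFC3339),
                cert.NotAfter.Format(time.RFC3339))
            if now.After(cert.NotAfter) {
                fmt.Printf("  expired %s ago\n", now.Sub(cert.NotAfter).Round(time.Hour))
            }
        }
    }

Run against this node, the leaf certificate should report notAfter=2025-08-24T17:21:41Z, matching the x509 error repeated in every patch failure above; rotating the webhook's serving certificate (or correcting the node clock, if it is the clock that is wrong) should clear these errors and let the kubelet post pod status again.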
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.157291 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.157310 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:12Z","lastTransitionTime":"2026-02-27T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.167520 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\
\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.179633 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.193927 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.211399 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.230633 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.249704 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.260134 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.260333 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.260536 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.260697 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.260881 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:12Z","lastTransitionTime":"2026-02-27T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.267068 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.288548 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.313254 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\
"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2da
ea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.346230 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2d038e49bb86f1716443bd69f020ce861f7598528a940b2b59139bd86de5ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\
\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.363890 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.363957 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.363984 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.364021 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.364049 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:12Z","lastTransitionTime":"2026-02-27T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.367678 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running
\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.386766 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.400948 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.415395 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.429189 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.446173 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.459954 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.470283 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 
16:55:12.470329 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.470345 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.470365 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.470379 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:12Z","lastTransitionTime":"2026-02-27T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.485534 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster
-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.512408 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 
16:55:12.533887 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.553054 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.567598 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.573385 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.573443 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.573465 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.573493 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.573515 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:12Z","lastTransitionTime":"2026-02-27T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.588937 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126
.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.614528 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\
":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.645668 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/
etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2d038e49bb86f1716443bd69f020ce861f7598528a940b2b59139bd86de5ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"r
un-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.662953 4708 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.676733 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.676950 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.677090 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.677257 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.677392 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:12Z","lastTransitionTime":"2026-02-27T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.682946 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.780598 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.780656 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.780676 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.780702 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.780721 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:12Z","lastTransitionTime":"2026-02-27T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.846791 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" event={"ID":"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b","Type":"ContainerStarted","Data":"e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859"}
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.867632 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.883191 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.883909 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.883950 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.883980 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.884001 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:12Z","lastTransitionTime":"2026-02-27T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.885730 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z"
Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.908640 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z"
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.921973 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.922060 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.922140 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:12 crc kubenswrapper[4708]: E0227 16:55:12.922211 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: 
object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:55:12 crc kubenswrapper[4708]: E0227 16:55:12.922242 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:55:12 crc kubenswrapper[4708]: E0227 16:55:12.922256 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:55:12 crc kubenswrapper[4708]: E0227 16:55:12.922284 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:55:12 crc kubenswrapper[4708]: E0227 16:55:12.922293 4708 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:12 crc kubenswrapper[4708]: E0227 16:55:12.922309 4708 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:12 crc kubenswrapper[4708]: E0227 16:55:12.922331 4708 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:55:12 crc kubenswrapper[4708]: E0227 16:55:12.922394 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:20.922361987 +0000 UTC m=+119.438159624 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:12 crc kubenswrapper[4708]: E0227 16:55:12.922439 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:20.922413989 +0000 UTC m=+119.438211616 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:55:12 crc kubenswrapper[4708]: E0227 16:55:12.922467 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:20.92245332 +0000 UTC m=+119.438250957 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.928825 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volume
Mounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.947050 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 
16:55:12.964528 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.987206 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.987269 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.987287 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.987313 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.987331 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:12Z","lastTransitionTime":"2026-02-27T16:55:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:12 crc kubenswrapper[4708]: I0227 16:55:12.988733 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1
dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\"
,\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\
\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.023671 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:55:13 crc kubenswrapper[4708]: E0227 16:55:13.024743 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:55:21.023897931 +0000 UTC m=+119.539695548 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.024938 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:13 crc kubenswrapper[4708]: E0227 16:55:13.025474 4708 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:55:13 crc kubenswrapper[4708]: E0227 16:55:13.025570 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:21.025545558 +0000 UTC m=+119.541343185 (durationBeforeRetry 8s). 
Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.027525 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2d038e49bb86f1716443bd69f020ce861f7598528a940b2b59139bd86de5ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:13Z is after 2025-08-24T17:21:41Z"
Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.054531 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:13Z is after 2025-08-24T17:21:41Z"
Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.076581 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:13Z is after 2025-08-24T17:21:41Z"
Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.091601 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.091643 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.091661 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.091687 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.091704 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:13Z","lastTransitionTime":"2026-02-27T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.133020 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:13Z is after 2025-08-24T17:21:41Z"
Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.133213 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs\") pod \"network-metrics-daemon-4t52p\" (UID: \"79b58c0b-8d12-4391-999c-9689f9488f46\") " pod="openshift-multus/network-metrics-daemon-4t52p"
Feb 27 16:55:13 crc kubenswrapper[4708]: E0227 16:55:13.133451 4708 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Feb 27 16:55:13 crc kubenswrapper[4708]: E0227 16:55:13.133571 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs podName:79b58c0b-8d12-4391-999c-9689f9488f46 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:21.133544768 +0000 UTC m=+119.649342365 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs") pod "network-metrics-daemon-4t52p" (UID: "79b58c0b-8d12-4391-999c-9689f9488f46") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.153452 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.175157 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.194909 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.194957 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.194969 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.194988 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.195001 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:13Z","lastTransitionTime":"2026-02-27T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.195373 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:13Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.228289 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.228330 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:13 crc kubenswrapper[4708]: E0227 16:55:13.228452 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.228485 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.228528 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:13 crc kubenswrapper[4708]: E0227 16:55:13.228720 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:13 crc kubenswrapper[4708]: E0227 16:55:13.228884 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:13 crc kubenswrapper[4708]: E0227 16:55:13.228938 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.244323 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.244372 4708 scope.go:117] "RemoveContainer" containerID="3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e" Feb 27 16:55:13 crc kubenswrapper[4708]: E0227 16:55:13.244553 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.297388 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.297418 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.297430 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.297449 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.297461 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:13Z","lastTransitionTime":"2026-02-27T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.399202 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.399236 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.399246 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.399260 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.399269 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:13Z","lastTransitionTime":"2026-02-27T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.501973 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.502037 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.502052 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.502092 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.502107 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:13Z","lastTransitionTime":"2026-02-27T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.604258 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.604329 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.604349 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.604374 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.604393 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:13Z","lastTransitionTime":"2026-02-27T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.707559 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.707633 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.707657 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.707683 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.707701 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:13Z","lastTransitionTime":"2026-02-27T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.810204 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.810241 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.810251 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.810266 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.810276 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:13Z","lastTransitionTime":"2026-02-27T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.851185 4708 scope.go:117] "RemoveContainer" containerID="3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e" Feb 27 16:55:13 crc kubenswrapper[4708]: E0227 16:55:13.851331 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.912562 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.912598 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.912608 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.912621 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:13 crc kubenswrapper[4708]: I0227 16:55:13.912633 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:13Z","lastTransitionTime":"2026-02-27T16:55:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.015628 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.015679 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.015696 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.015720 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.015739 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:14Z","lastTransitionTime":"2026-02-27T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.119230 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.119302 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.119321 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.119346 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.119364 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:14Z","lastTransitionTime":"2026-02-27T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.223039 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.223096 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.223114 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.223139 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.223157 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:14Z","lastTransitionTime":"2026-02-27T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.327707 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.327765 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.327783 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.327809 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.327828 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:14Z","lastTransitionTime":"2026-02-27T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.363623 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.363668 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.363690 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.363715 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.363734 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:14Z","lastTransitionTime":"2026-02-27T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: E0227 16:55:14.386519 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.392051 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.392132 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.392152 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.392179 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.392203 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:14Z","lastTransitionTime":"2026-02-27T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: E0227 16:55:14.411948 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.416997 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.417055 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.417073 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.417100 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.417118 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:14Z","lastTransitionTime":"2026-02-27T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: E0227 16:55:14.436952 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.443235 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.443317 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.443338 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.443368 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.443389 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:14Z","lastTransitionTime":"2026-02-27T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: E0227 16:55:14.465402 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.470588 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.470633 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.470645 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.470667 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.470682 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:14Z","lastTransitionTime":"2026-02-27T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: E0227 16:55:14.490421 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:14 crc kubenswrapper[4708]: E0227 16:55:14.490674 4708 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.492919 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.492967 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.492985 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.493014 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.493035 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:14Z","lastTransitionTime":"2026-02-27T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.595357 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.595448 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.595474 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.595505 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.595528 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:14Z","lastTransitionTime":"2026-02-27T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.699165 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.699227 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.699247 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.699281 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.699301 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:14Z","lastTransitionTime":"2026-02-27T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.802006 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.802347 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.802555 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.802731 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.802930 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:14Z","lastTransitionTime":"2026-02-27T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.862837 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/0.log" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.868488 4708 generic.go:334] "Generic (PLEG): container finished" podID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerID="a2d038e49bb86f1716443bd69f020ce861f7598528a940b2b59139bd86de5ba3" exitCode=1 Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.868549 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerDied","Data":"a2d038e49bb86f1716443bd69f020ce861f7598528a940b2b59139bd86de5ba3"} Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.869769 4708 scope.go:117] "RemoveContainer" containerID="a2d038e49bb86f1716443bd69f020ce861f7598528a940b2b59139bd86de5ba3" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.888304 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.908775 4708 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.909532 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.909587 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 
16:55:14.909610 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.909651 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.909675 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:14Z","lastTransitionTime":"2026-02-27T16:55:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.939777 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.963565 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.983686 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:14 crc kubenswrapper[4708]: I0227 16:55:14.999757 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:14Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.015414 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.015463 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.015495 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.015523 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.015543 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:15Z","lastTransitionTime":"2026-02-27T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.019133 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":t
rue,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.041306 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.061593 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located 
when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.081474 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.099460 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.121201 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.121250 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.121273 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.121302 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.121320 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:15Z","lastTransitionTime":"2026-02-27T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.122435 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.145300 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.176764 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2d038e49bb86f1716443bd69f020ce861f7598528a940b2b59139bd86de5ba3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d038e49bb86f1716443bd69f020ce861f7598528a940b2b59139bd86de5ba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"ctor.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:14.331332 6571 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 16:55:14.331435 6571 factory.go:656] Stopping watch factory\\\\nI0227 16:55:14.331462 6571 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 16:55:14.331467 6571 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 16:55:14.331523 6571 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:14.331582 6571 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 16:55:14.331596 6571 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:14.331621 6571 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 16:55:14.331826 6571 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.194239 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.224761 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.224889 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.224918 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.224955 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.224980 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:15Z","lastTransitionTime":"2026-02-27T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.228011 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.228081 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.228086 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.228009 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:15 crc kubenswrapper[4708]: E0227 16:55:15.228200 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:15 crc kubenswrapper[4708]: E0227 16:55:15.228356 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:15 crc kubenswrapper[4708]: E0227 16:55:15.228472 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:15 crc kubenswrapper[4708]: E0227 16:55:15.228632 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.328716 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.328813 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.328839 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.328912 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.328939 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:15Z","lastTransitionTime":"2026-02-27T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.431712 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.431782 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.431801 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.431830 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.431888 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:15Z","lastTransitionTime":"2026-02-27T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.534711 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.534763 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.534773 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.534868 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.534880 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:15Z","lastTransitionTime":"2026-02-27T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.638111 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.638162 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.638178 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.638210 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.638224 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:15Z","lastTransitionTime":"2026-02-27T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.741363 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.741425 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.741447 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.741474 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.741534 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:15Z","lastTransitionTime":"2026-02-27T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.844824 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.844936 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.844955 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.844986 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.845008 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:15Z","lastTransitionTime":"2026-02-27T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.876309 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/0.log" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.880335 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerStarted","Data":"36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f"} Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.881232 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.898484 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.916530 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.937576 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.949235 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.949326 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.949912 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.949963 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.950051 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:15Z","lastTransitionTime":"2026-02-27T16:55:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.965766 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.981836 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:15 crc kubenswrapper[4708]: I0227 16:55:15.994610 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:15Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.015704 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.040371 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:55:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.052721 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.052782 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.052801 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.053038 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.053061 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:16Z","lastTransitionTime":"2026-02-27T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.070639 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b8b66c91a9d417d9a9579aabd326ab9e704223
cde4fa981c44fe806bd5396f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d038e49bb86f1716443bd69f020ce861f7598528a940b2b59139bd86de5ba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"ctor.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:14.331332 6571 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 16:55:14.331435 6571 factory.go:656] Stopping watch factory\\\\nI0227 16:55:14.331462 6571 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 16:55:14.331467 6571 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 16:55:14.331523 6571 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:14.331582 6571 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 16:55:14.331596 6571 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:14.331621 6571 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 16:55:14.331826 6571 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initConta
inerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.089220 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" 
for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.110520 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.128584 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.149179 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.156615 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.156674 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.156693 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.156726 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.156750 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:16Z","lastTransitionTime":"2026-02-27T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.169063 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.185551 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.260208 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.260266 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.260286 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.260313 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.260332 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:16Z","lastTransitionTime":"2026-02-27T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.363885 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.363957 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.363975 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.364002 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.364020 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:16Z","lastTransitionTime":"2026-02-27T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.466716 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.466765 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.466783 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.466807 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.466824 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:16Z","lastTransitionTime":"2026-02-27T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.569870 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.569948 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.569969 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.569997 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.570018 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:16Z","lastTransitionTime":"2026-02-27T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.704797 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.704890 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.704905 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.704926 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.704939 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:16Z","lastTransitionTime":"2026-02-27T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.807337 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.807383 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.807395 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.807414 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.807425 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:16Z","lastTransitionTime":"2026-02-27T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.889736 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/1.log" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.890344 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/0.log" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.893383 4708 generic.go:334] "Generic (PLEG): container finished" podID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerID="36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f" exitCode=1 Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.893422 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerDied","Data":"36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f"} Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.893458 4708 scope.go:117] "RemoveContainer" containerID="a2d038e49bb86f1716443bd69f020ce861f7598528a940b2b59139bd86de5ba3" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.894577 4708 scope.go:117] "RemoveContainer" containerID="36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f" Feb 27 16:55:16 crc kubenswrapper[4708]: E0227 16:55:16.894972 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.909307 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.909341 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.909353 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.909368 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.909378 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:16Z","lastTransitionTime":"2026-02-27T16:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.916391 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.938396 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2
7753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed 
container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.955970 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.973582 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:16 crc kubenswrapper[4708]: I0227 16:55:16.992516 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:16Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.012523 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.012560 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.012572 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.012588 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.012601 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:17Z","lastTransitionTime":"2026-02-27T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.013380 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 
16:55:17.031567 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.046493 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.059472 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.075702 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.098518 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:55:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.115445 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.115496 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.115514 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.115537 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.115554 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:17Z","lastTransitionTime":"2026-02-27T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.128964 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b8b66c91a9d417d9a9579aabd326ab9e704223
cde4fa981c44fe806bd5396f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a2d038e49bb86f1716443bd69f020ce861f7598528a940b2b59139bd86de5ba3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:14Z\\\",\\\"message\\\":\\\"ctor.go:311] Stopping reflector *v1.NetworkPolicy (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:14.331332 6571 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 16:55:14.331435 6571 factory.go:656] Stopping watch factory\\\\nI0227 16:55:14.331462 6571 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 16:55:14.331467 6571 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 16:55:14.331523 6571 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:14.331582 6571 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 16:55:14.331596 6571 reflector.go:311] Stopping reflector *v1.Node (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:14.331621 6571 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 16:55:14.331826 6571 reflector.go:311] Stopping reflector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:15Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:15.962800 6731 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 16:55:15.962834 6731 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 16:55:15.962887 6731 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 16:55:15.962997 6731 handler.go:208] Removed *v1.Node event handler 7\\\\nI0227 16:55:15.963143 6731 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0227 16:55:15.963176 6731 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0227 16:55:15.963200 6731 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 16:55:15.963236 6731 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 16:55:15.963263 6731 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 16:55:15.963280 6731 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0227 16:55:15.963295 6731 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 16:55:15.963317 6731 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0227 16:55:15.963366 6731 handler.go:208] Removed *v1.EgressFirewall event 
handler 9\\\\nI0227 16:55:15.963411 6731 factory.go:656] Stopping watch factory\\\\nI0227 16:55:15.963432 6731 ovnkube.go:599] Stopped ovnkube\\\\nI0227 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4d
a5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.146424 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.164538 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.184689 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.218732 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.218791 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.218808 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.218833 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.218874 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:17Z","lastTransitionTime":"2026-02-27T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.227834 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.227883 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.227903 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.228075 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:17 crc kubenswrapper[4708]: E0227 16:55:17.228221 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:17 crc kubenswrapper[4708]: E0227 16:55:17.228525 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:17 crc kubenswrapper[4708]: E0227 16:55:17.228622 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:17 crc kubenswrapper[4708]: E0227 16:55:17.228685 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.242795 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.321791 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.321931 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.321954 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.322024 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.322044 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:17Z","lastTransitionTime":"2026-02-27T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.424567 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.424626 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.424643 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.424666 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.424684 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:17Z","lastTransitionTime":"2026-02-27T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.527531 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.527579 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.527595 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.527621 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.527643 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:17Z","lastTransitionTime":"2026-02-27T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.630459 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.630509 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.630526 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.630548 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.630567 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:17Z","lastTransitionTime":"2026-02-27T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.733631 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.733693 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.733712 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.733739 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.733759 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:17Z","lastTransitionTime":"2026-02-27T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.836677 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.836743 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.836761 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.836786 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.836805 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:17Z","lastTransitionTime":"2026-02-27T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.900790 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/1.log" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.907313 4708 scope.go:117] "RemoveContainer" containerID="36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f" Feb 27 16:55:17 crc kubenswrapper[4708]: E0227 16:55:17.910427 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.931225 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identi
ty-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.940379 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.940600 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.940628 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.940662 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.940686 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:17Z","lastTransitionTime":"2026-02-27T16:55:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.948055 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.966614 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:17Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:17 crc kubenswrapper[4708]: I0227 16:55:17.988158 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.009670 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:18Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.028357 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.044164 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.044256 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.044278 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.044303 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.044354 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:18Z","lastTransitionTime":"2026-02-27T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.046078 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.061397 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.080652 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.103318 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:55:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.143196 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:15Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:15.962800 6731 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 16:55:15.962834 6731 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 16:55:15.962887 6731 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 16:55:15.962997 6731 handler.go:208] Removed *v1.Node event handler 7\\\\nI0227 16:55:15.963143 6731 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0227 16:55:15.963176 6731 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0227 16:55:15.963200 6731 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 16:55:15.963236 6731 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 16:55:15.963263 6731 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 16:55:15.963280 6731 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0227 16:55:15.963295 6731 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 16:55:15.963317 6731 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0227 16:55:15.963366 6731 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0227 16:55:15.963411 6731 factory.go:656] Stopping watch factory\\\\nI0227 16:55:15.963432 6731 ovnkube.go:599] Stopped ovnkube\\\\nI0227 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.148355 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.148607 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.148769 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.148949 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.149083 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:18Z","lastTransitionTime":"2026-02-27T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.170876 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.190203 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.207221 4708 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.247733 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints 
pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.251319 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.251379 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.251393 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.251410 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.251834 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:18Z","lastTransitionTime":"2026-02-27T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.269311 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:18Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.354900 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.354975 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.354990 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.355025 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.355041 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:18Z","lastTransitionTime":"2026-02-27T16:55:18Z","reason":"KubeletNotReady","message":"container runtime 
network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.457432 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.457489 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.457509 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.457531 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.457550 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:18Z","lastTransitionTime":"2026-02-27T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.561089 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.561428 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.561571 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.561717 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.561838 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:18Z","lastTransitionTime":"2026-02-27T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.665129 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.665213 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.665230 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.665253 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.665269 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:18Z","lastTransitionTime":"2026-02-27T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.768039 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.768129 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.768153 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.768187 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.768211 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:18Z","lastTransitionTime":"2026-02-27T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.871520 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.871831 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.872017 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.872159 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.872317 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:18Z","lastTransitionTime":"2026-02-27T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.975692 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.975761 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.975782 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.975814 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:18 crc kubenswrapper[4708]: I0227 16:55:18.975833 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:18Z","lastTransitionTime":"2026-02-27T16:55:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.078641 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.078914 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.079057 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.079193 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.079349 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:19Z","lastTransitionTime":"2026-02-27T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.182933 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.183028 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.183047 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.183068 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.183084 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:19Z","lastTransitionTime":"2026-02-27T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.227943 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.228012 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.228059 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:19 crc kubenswrapper[4708]: E0227 16:55:19.228122 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:19 crc kubenswrapper[4708]: E0227 16:55:19.228228 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:19 crc kubenswrapper[4708]: E0227 16:55:19.228411 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.228639 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:19 crc kubenswrapper[4708]: E0227 16:55:19.228972 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.285973 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.286044 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.286066 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.286100 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.286122 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:19Z","lastTransitionTime":"2026-02-27T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.389368 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.389428 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.389445 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.389471 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.389492 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:19Z","lastTransitionTime":"2026-02-27T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.492048 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.492130 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.492148 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.492179 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.492208 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:19Z","lastTransitionTime":"2026-02-27T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.595158 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.595219 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.595239 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.595263 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.595281 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:19Z","lastTransitionTime":"2026-02-27T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.698191 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.698291 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.698311 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.698377 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.698402 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:19Z","lastTransitionTime":"2026-02-27T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.801231 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.801289 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.801306 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.801331 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.801352 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:19Z","lastTransitionTime":"2026-02-27T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.903927 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.903989 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.904030 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.904056 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:19 crc kubenswrapper[4708]: I0227 16:55:19.904076 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:19Z","lastTransitionTime":"2026-02-27T16:55:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.006697 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.006754 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.006773 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.006803 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.006824 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:20Z","lastTransitionTime":"2026-02-27T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.110226 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.110284 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.110303 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.110327 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.110345 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:20Z","lastTransitionTime":"2026-02-27T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.213117 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.213198 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.213217 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.213253 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.213277 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:20Z","lastTransitionTime":"2026-02-27T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.316347 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.316405 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.316422 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.316449 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.316466 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:20Z","lastTransitionTime":"2026-02-27T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.419563 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.419645 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.419665 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.419749 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.419775 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:20Z","lastTransitionTime":"2026-02-27T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.523441 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.523802 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.523820 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.523870 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.523892 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:20Z","lastTransitionTime":"2026-02-27T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.626615 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.626663 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.626680 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.626700 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.626717 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:20Z","lastTransitionTime":"2026-02-27T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.731466 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.731568 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.731586 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.731614 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.731636 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:20Z","lastTransitionTime":"2026-02-27T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.835005 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.835077 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.835096 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.835129 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.835150 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:20Z","lastTransitionTime":"2026-02-27T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
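The five messages that repeat every ~100 ms above (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, plus the "Node became not ready" condition) all trace back to a single fact: the container runtime reports NetworkReady=false because /etc/kubernetes/cni/net.d/ does not yet contain a CNI configuration file. A minimal sketch of that readiness test follows, assuming the runtime accepts the usual *.conf/*.conflist/*.json config names; it is an illustration of the check, not the runtime's actual source.

// cnicheck.go: hypothetical re-creation of the test behind
// "no CNI configuration file in /etc/kubernetes/cni/net.d/".
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/kubernetes/cni/net.d" // directory named in the log
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, "bad pattern:", err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		// The state the kubelet keeps reporting above: NetworkReady=false
		// until the network provider writes a config into confDir.
		fmt.Println("NetworkReady=false: no CNI configuration file in", confDir)
		return
	}
	fmt.Println("NetworkReady=true, found:", found)
}

Once the network plugin writes its configuration into that directory, the runtime reports NetworkReady=true and this flood of node-status records stops.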
Has your network provider started?"} Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.924032 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.924107 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.924163 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:20 crc kubenswrapper[4708]: E0227 16:55:20.924323 4708 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:55:20 crc kubenswrapper[4708]: E0227 16:55:20.924397 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:36.924376154 +0000 UTC m=+135.440173771 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:55:20 crc kubenswrapper[4708]: E0227 16:55:20.924772 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:55:20 crc kubenswrapper[4708]: E0227 16:55:20.924968 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:55:20 crc kubenswrapper[4708]: E0227 16:55:20.925097 4708 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:20 crc kubenswrapper[4708]: E0227 16:55:20.925315 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:36.92528869 +0000 UTC m=+135.441086307 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:20 crc kubenswrapper[4708]: E0227 16:55:20.924914 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:55:20 crc kubenswrapper[4708]: E0227 16:55:20.925618 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:55:20 crc kubenswrapper[4708]: E0227 16:55:20.925737 4708 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:20 crc kubenswrapper[4708]: E0227 16:55:20.925945 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:36.925926758 +0000 UTC m=+135.441724385 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.938171 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.938226 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.938247 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.938274 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:20 crc kubenswrapper[4708]: I0227 16:55:20.938294 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:20Z","lastTransitionTime":"2026-02-27T16:55:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
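Each MountVolume.SetUp failure above is parked with "No retries permitted until ..." roughly sixteen seconds in the future (durationBeforeRetry 16s). That is the signature of a doubling backoff: assuming an initial delay of 0.5 s and a growth factor of two (constants inferred from the observed 16 s, not taken from this log), the sixth consecutive failure lands exactly on 16 s. A self-contained sketch:

// backoff.go: the doubling retry schedule suggested by
// "durationBeforeRetry 16s" in the MountVolume failures above.
// The constants (0.5 s initial, 2x growth, ~2 min cap) are assumptions.
package main

import (
	"fmt"
	"time"
)

func nextDelay(prev time.Duration) time.Duration {
	const (
		initial  = 500 * time.Millisecond
		maxDelay = 2*time.Minute + 2*time.Second
	)
	if prev == 0 {
		return initial
	}
	next := prev * 2
	if next > maxDelay {
		next = maxDelay
	}
	return next
}

func main() {
	d := time.Duration(0)
	for attempt := 1; attempt <= 8; attempt++ {
		d = nextDelay(d)
		fmt.Printf("attempt %d fails -> retry in %v\n", attempt, d)
	}
}

The accompanying m=+135.44... figure is the kubelet's monotonic clock: seconds elapsed since this kubelet process started.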
Has your network provider started?"} Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.024529 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:55:21 crc kubenswrapper[4708]: E0227 16:55:21.025602 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:55:37.025552337 +0000 UTC m=+135.541349984 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.041135 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.041199 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.041218 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.041247 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.041293 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:21Z","lastTransitionTime":"2026-02-27T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.125409 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:21 crc kubenswrapper[4708]: E0227 16:55:21.125644 4708 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:55:21 crc kubenswrapper[4708]: E0227 16:55:21.125731 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
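The UnmountVolume.TearDown failure above is a different species from the config-map and secret errors: the kubelet cannot even construct a CSI client, because kubevirt.io.hostpath-provisioner is absent from its list of registered drivers — the driver has not (re)registered since this kubelet restarted. One rough way to see what has registered on a node is to list the registration sockets; the plugins_registry path below is the kubelet's conventional location and is an assumption about this deployment.

// csidrivers.go: rough listing of CSI/plugin registration sockets
// visible to the kubelet on this node (path is an assumption).
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	entries, err := os.ReadDir("/var/lib/kubelet/plugins_registry")
	if err != nil {
		fmt.Fprintln(os.Stderr, "cannot read registry dir:", err)
		os.Exit(1)
	}
	for _, e := range entries {
		// Registration sockets are typically named <driver-name>-reg.sock.
		if strings.HasSuffix(e.Name(), ".sock") {
			fmt.Println(strings.TrimSuffix(e.Name(), "-reg.sock"))
		}
	}
}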
No retries permitted until 2026-02-27 16:55:37.125707291 +0000 UTC m=+135.641504918 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.144666 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.144714 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.144734 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.144758 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.144777 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:21Z","lastTransitionTime":"2026-02-27T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.226340 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs\") pod \"network-metrics-daemon-4t52p\" (UID: \"79b58c0b-8d12-4391-999c-9689f9488f46\") " pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:21 crc kubenswrapper[4708]: E0227 16:55:21.226532 4708 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:55:21 crc kubenswrapper[4708]: E0227 16:55:21.226632 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs podName:79b58c0b-8d12-4391-999c-9689f9488f46 nodeName:}" failed. No retries permitted until 2026-02-27 16:55:37.226606257 +0000 UTC m=+135.742404044 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs") pod "network-metrics-daemon-4t52p" (UID: "79b58c0b-8d12-4391-999c-9689f9488f46") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.227306 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.227362 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.227473 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.227527 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:21 crc kubenswrapper[4708]: E0227 16:55:21.227523 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:21 crc kubenswrapper[4708]: E0227 16:55:21.227687 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:21 crc kubenswrapper[4708]: E0227 16:55:21.227938 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:21 crc kubenswrapper[4708]: E0227 16:55:21.228224 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.249391 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.249475 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.249495 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.249516 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.249563 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:21Z","lastTransitionTime":"2026-02-27T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.353049 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.353104 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.353125 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.353148 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.353164 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:21Z","lastTransitionTime":"2026-02-27T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.456227 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.456296 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.456314 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.456343 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.456363 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:21Z","lastTransitionTime":"2026-02-27T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.559943 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.560037 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.560057 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.560624 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.560693 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:21Z","lastTransitionTime":"2026-02-27T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.664675 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.664744 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.664763 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.664791 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.664809 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:21Z","lastTransitionTime":"2026-02-27T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.767683 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.767748 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.767770 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.767800 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.767821 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:21Z","lastTransitionTime":"2026-02-27T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.870876 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.870937 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.870956 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.870980 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.870995 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:21Z","lastTransitionTime":"2026-02-27T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.973600 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.973659 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.973679 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.973704 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:21 crc kubenswrapper[4708]: I0227 16:55:21.973722 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:21Z","lastTransitionTime":"2026-02-27T16:55:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.076931 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.076976 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.076999 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.077024 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.077045 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:22Z","lastTransitionTime":"2026-02-27T16:55:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:22 crc kubenswrapper[4708]: E0227 16:55:22.177653 4708 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.251630 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hos
tIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.274782 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.1
1\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.305437 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\
",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kuber
netes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:15Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:15.962800 6731 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 16:55:15.962834 6731 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 16:55:15.962887 6731 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 16:55:15.962997 6731 handler.go:208] Removed *v1.Node event handler 7\\\\nI0227 16:55:15.963143 6731 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0227 16:55:15.963176 6731 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0227 16:55:15.963200 6731 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 16:55:15.963236 6731 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 16:55:15.963263 6731 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 16:55:15.963280 6731 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0227 16:55:15.963295 6731 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 16:55:15.963317 6731 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0227 16:55:15.963366 6731 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0227 16:55:15.963411 6731 factory.go:656] Stopping watch factory\\\\nI0227 
16:55:15.963432 6731 ovnkube.go:599] Stopped ovnkube\\\\nI0227 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"
kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:22 crc kubenswrapper[4708]: E0227 16:55:22.318535 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.326290 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.346250 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.365744 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.380484 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.397988 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.419500 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(f4b27818a5e8e43d0dc095d08835c792)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.437459 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.453361 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.473244 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.492234 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.510244 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.532473 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:22 crc kubenswrapper[4708]: I0227 16:55:22.552379 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:22Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:23 crc kubenswrapper[4708]: I0227 16:55:23.228090 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 16:55:23 crc kubenswrapper[4708]: I0227 16:55:23.228166 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p"
Feb 27 16:55:23 crc kubenswrapper[4708]: I0227 16:55:23.228257 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 16:55:23 crc kubenswrapper[4708]: I0227 16:55:23.228094 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 16:55:23 crc kubenswrapper[4708]: E0227 16:55:23.228268 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 16:55:23 crc kubenswrapper[4708]: E0227 16:55:23.228405 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 16:55:23 crc kubenswrapper[4708]: E0227 16:55:23.228508 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 16:55:23 crc kubenswrapper[4708]: E0227 16:55:23.228590 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.745660 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.745724 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.745742 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.745767 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.745786 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:24Z","lastTransitionTime":"2026-02-27T16:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:24 crc kubenswrapper[4708]: E0227 16:55:24.766073 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.771063 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.771122 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.771140 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.771169 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.771187 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:24Z","lastTransitionTime":"2026-02-27T16:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:24 crc kubenswrapper[4708]: E0227 16:55:24.790663 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.795332 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.795612 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.795764 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.795940 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.796093 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:24Z","lastTransitionTime":"2026-02-27T16:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:24 crc kubenswrapper[4708]: E0227 16:55:24.815347 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.820933 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.821191 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.821332 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.821469 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.821610 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:24Z","lastTransitionTime":"2026-02-27T16:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:24 crc kubenswrapper[4708]: E0227 16:55:24.840402 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.845021 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.845105 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.845125 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.845145 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:24 crc kubenswrapper[4708]: I0227 16:55:24.845162 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:24Z","lastTransitionTime":"2026-02-27T16:55:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:24 crc kubenswrapper[4708]: E0227 16:55:24.866994 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:24Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:24Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:24 crc kubenswrapper[4708]: E0227 16:55:24.867211 4708 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:55:25 crc kubenswrapper[4708]: I0227 16:55:25.227597 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:25 crc kubenswrapper[4708]: I0227 16:55:25.227654 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:25 crc kubenswrapper[4708]: E0227 16:55:25.227793 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:25 crc kubenswrapper[4708]: I0227 16:55:25.227932 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:25 crc kubenswrapper[4708]: I0227 16:55:25.227964 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:25 crc kubenswrapper[4708]: E0227 16:55:25.228128 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:25 crc kubenswrapper[4708]: E0227 16:55:25.228294 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:25 crc kubenswrapper[4708]: E0227 16:55:25.228799 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:25 crc kubenswrapper[4708]: I0227 16:55:25.229100 4708 scope.go:117] "RemoveContainer" containerID="3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e" Feb 27 16:55:25 crc kubenswrapper[4708]: I0227 16:55:25.940410 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 16:55:25 crc kubenswrapper[4708]: I0227 16:55:25.943633 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3"} Feb 27 16:55:25 crc kubenswrapper[4708]: I0227 16:55:25.944212 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:55:25 crc kubenswrapper[4708]: I0227 16:55:25.962787 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:25Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:25 crc kubenswrapper[4708]: I0227 16:55:25.977780 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s 
of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18
fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:25Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:25 crc kubenswrapper[4708]: I0227 16:55:25.991941 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2026-02-27T16:55:25Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:26 crc kubenswrapper[4708]: I0227 16:55:26.010619 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:26Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:26 crc kubenswrapper[4708]: I0227 16:55:26.024139 4708 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:26Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:26 crc kubenswrapper[4708]: I0227 16:55:26.040143 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:26Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:26 crc kubenswrapper[4708]: I0227 16:55:26.058893 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:26Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:26 crc kubenswrapper[4708]: I0227 16:55:26.081302 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:55:26Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:26 crc kubenswrapper[4708]: I0227 16:55:26.113205 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:15Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:15.962800 6731 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 16:55:15.962834 6731 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 16:55:15.962887 6731 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 16:55:15.962997 6731 handler.go:208] Removed *v1.Node event handler 7\\\\nI0227 16:55:15.963143 6731 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0227 16:55:15.963176 6731 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0227 16:55:15.963200 6731 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 16:55:15.963236 6731 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 16:55:15.963263 6731 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 16:55:15.963280 6731 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0227 16:55:15.963295 6731 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 16:55:15.963317 6731 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0227 16:55:15.963366 6731 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0227 16:55:15.963411 6731 factory.go:656] Stopping watch factory\\\\nI0227 16:55:15.963432 6731 ovnkube.go:599] Stopped ovnkube\\\\nI0227 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed 
container=ovnkube-controller pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:26Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:26 crc kubenswrapper[4708]: I0227 16:55:26.132676 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:26Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:26 crc kubenswrapper[4708]: I0227 16:55:26.152819 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:26Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:26 crc kubenswrapper[4708]: I0227 16:55:26.173014 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:26Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:26 crc kubenswrapper[4708]: I0227 16:55:26.192462 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:26Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:26 crc kubenswrapper[4708]: I0227 16:55:26.214063 4708 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:26Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:26 crc kubenswrapper[4708]: I0227 16:55:26.235176 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:26Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:26 crc kubenswrapper[4708]: I0227 16:55:26.251356 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:26Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:27 crc kubenswrapper[4708]: I0227 16:55:27.227724 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:27 crc kubenswrapper[4708]: I0227 16:55:27.227918 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:27 crc kubenswrapper[4708]: I0227 16:55:27.227936 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:27 crc kubenswrapper[4708]: E0227 16:55:27.228044 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:27 crc kubenswrapper[4708]: E0227 16:55:27.228237 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:27 crc kubenswrapper[4708]: I0227 16:55:27.228339 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:27 crc kubenswrapper[4708]: E0227 16:55:27.228438 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:27 crc kubenswrapper[4708]: E0227 16:55:27.228544 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:27 crc kubenswrapper[4708]: E0227 16:55:27.319797 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:55:29 crc kubenswrapper[4708]: I0227 16:55:29.228164 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:29 crc kubenswrapper[4708]: I0227 16:55:29.228216 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:29 crc kubenswrapper[4708]: I0227 16:55:29.228257 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:29 crc kubenswrapper[4708]: I0227 16:55:29.228220 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:29 crc kubenswrapper[4708]: E0227 16:55:29.228370 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:29 crc kubenswrapper[4708]: E0227 16:55:29.228443 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:29 crc kubenswrapper[4708]: E0227 16:55:29.228660 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:29 crc kubenswrapper[4708]: E0227 16:55:29.228834 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:31 crc kubenswrapper[4708]: I0227 16:55:31.227423 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:31 crc kubenswrapper[4708]: I0227 16:55:31.227491 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:31 crc kubenswrapper[4708]: I0227 16:55:31.227516 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:31 crc kubenswrapper[4708]: E0227 16:55:31.227616 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:31 crc kubenswrapper[4708]: I0227 16:55:31.227643 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:31 crc kubenswrapper[4708]: E0227 16:55:31.228029 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:31 crc kubenswrapper[4708]: E0227 16:55:31.227914 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:31 crc kubenswrapper[4708]: E0227 16:55:31.227801 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.248267 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readO
nly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.269092 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/
cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.299175 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://36b8b66c91a9d417d9a9579aabd326ab9e704223
cde4fa981c44fe806bd5396f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:15Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:15.962800 6731 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 16:55:15.962834 6731 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 16:55:15.962887 6731 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 16:55:15.962997 6731 handler.go:208] Removed *v1.Node event handler 7\\\\nI0227 16:55:15.963143 6731 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0227 16:55:15.963176 6731 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0227 16:55:15.963200 6731 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 16:55:15.963236 6731 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 16:55:15.963263 6731 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 16:55:15.963280 6731 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0227 16:55:15.963295 6731 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 16:55:15.963317 6731 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0227 16:55:15.963366 6731 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0227 16:55:15.963411 6731 factory.go:656] Stopping watch factory\\\\nI0227 16:55:15.963432 6731 ovnkube.go:599] Stopped ovnkube\\\\nI0227 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.316028 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:32 crc kubenswrapper[4708]: E0227 16:55:32.321292 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.336155 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.360470 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.376773 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.394154 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.417128 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-ap
iserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.437593 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.453522 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.474166 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.495437 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.515776 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.531429 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:32 crc kubenswrapper[4708]: I0227 16:55:32.554506 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:32Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:33 crc kubenswrapper[4708]: I0227 16:55:33.227395 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:33 crc kubenswrapper[4708]: I0227 16:55:33.227481 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:33 crc kubenswrapper[4708]: E0227 16:55:33.227734 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:33 crc kubenswrapper[4708]: I0227 16:55:33.227978 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:33 crc kubenswrapper[4708]: I0227 16:55:33.228055 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:33 crc kubenswrapper[4708]: E0227 16:55:33.228225 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:33 crc kubenswrapper[4708]: E0227 16:55:33.228875 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:33 crc kubenswrapper[4708]: E0227 16:55:33.229041 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:33 crc kubenswrapper[4708]: I0227 16:55:33.229396 4708 scope.go:117] "RemoveContainer" containerID="36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f" Feb 27 16:55:33 crc kubenswrapper[4708]: I0227 16:55:33.996967 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/1.log" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.001983 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerStarted","Data":"5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a"} Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.002562 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.017070 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.033967 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.052565 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.070264 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.111133 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.147693 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.164791 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.177912 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.192286 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.209876 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:15Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:15.962800 6731 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 16:55:15.962834 6731 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 16:55:15.962887 6731 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 16:55:15.962997 6731 handler.go:208] Removed *v1.Node event handler 7\\\\nI0227 16:55:15.963143 6731 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0227 16:55:15.963176 6731 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0227 16:55:15.963200 6731 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 16:55:15.963236 6731 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 16:55:15.963263 6731 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 16:55:15.963280 6731 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0227 16:55:15.963295 6731 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 16:55:15.963317 6731 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0227 16:55:15.963366 6731 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0227 16:55:15.963411 6731 factory.go:656] Stopping watch factory\\\\nI0227 16:55:15.963432 6731 ovnkube.go:599] Stopped ovnkube\\\\nI0227 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"cont
ainerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.220750 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.234195 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.247144 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":tru
e,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.262005 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.274263 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.283460 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.913924 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.913986 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.914004 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.914032 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.914064 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:34Z","lastTransitionTime":"2026-02-27T16:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:34 crc kubenswrapper[4708]: E0227 16:55:34.934777 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 
2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.940202 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.940245 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.940263 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.940286 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.940305 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:34Z","lastTransitionTime":"2026-02-27T16:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:34 crc kubenswrapper[4708]: E0227 16:55:34.961745 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 
2025-08-24T17:21:41Z" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.972302 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.972548 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.972736 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.973020 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:34 crc kubenswrapper[4708]: I0227 16:55:34.973067 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:34Z","lastTransitionTime":"2026-02-27T16:55:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:34 crc kubenswrapper[4708]: E0227 16:55:34.996354 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:34Z is after 
2025-08-24T17:21:41Z"
Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.002335 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.002382 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.002404 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.002429 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.002448 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:35Z","lastTransitionTime":"2026-02-27T16:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.009764 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/2.log"
Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.011113 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/1.log"
Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.015008 4708 generic.go:334] "Generic (PLEG): container finished" podID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerID="5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a" exitCode=1
Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.015198 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerDied","Data":"5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a"}
Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.015386 4708 scope.go:117] "RemoveContainer" containerID="36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f"
Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.016172 4708 scope.go:117] "RemoveContainer" containerID="5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a"
Feb 27 16:55:35 crc kubenswrapper[4708]: E0227 16:55:35.016445 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554"
Feb 27 16:55:35 crc kubenswrapper[4708]: E0227 16:55:35.021994 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae
669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.034047 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.034091 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.034108 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.034137 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.034155 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:35Z","lastTransitionTime":"2026-02-27T16:55:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.037108 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: E0227 16:55:35.055426 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:35Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 
2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: E0227 16:55:35.056033 4708 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.058946 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.075835 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.096241 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.118936 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.140043 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.155828 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.174104 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.194140 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.213114 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: 
I0227 16:55:35.227533 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.227577 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.227796 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.228073 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:35 crc kubenswrapper[4708]: E0227 16:55:35.228226 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:35 crc kubenswrapper[4708]: E0227 16:55:35.228350 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:35 crc kubenswrapper[4708]: E0227 16:55:35.228451 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:35 crc kubenswrapper[4708]: E0227 16:55:35.229534 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.233280 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.251517 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.267597 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.288891 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\
\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.312099 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11
Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:35 crc kubenswrapper[4708]: I0227 16:55:35.342741 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b
231675f1a67015e17eca670a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://36b8b66c91a9d417d9a9579aabd326ab9e704223cde4fa981c44fe806bd5396f\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:15Z\\\",\\\"message\\\":\\\"flector *v1.EndpointSlice (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:55:15.962800 6731 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 16:55:15.962834 6731 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 16:55:15.962887 6731 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 16:55:15.962997 6731 handler.go:208] Removed *v1.Node event handler 7\\\\nI0227 16:55:15.963143 6731 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0227 16:55:15.963176 6731 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0227 16:55:15.963200 6731 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0227 16:55:15.963236 6731 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0227 16:55:15.963263 6731 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 16:55:15.963280 6731 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0227 16:55:15.963295 6731 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0227 16:55:15.963317 6731 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0227 16:55:15.963366 6731 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0227 16:55:15.963411 6731 factory.go:656] Stopping watch factory\\\\nI0227 16:55:15.963432 6731 ovnkube.go:599] Stopped ovnkube\\\\nI0227 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:15Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"perator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0227 16:55:34.436115 6939 services_controller.go:452] Built service openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436130 6939 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436139 6939 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nI0227 16:55:34.436144 6939 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) 
and 0 (template) load balancers\\\\nF0227 16:55:34.436097 6939 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\
\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:35Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.022058 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/2.log" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.027231 4708 scope.go:117] "RemoveContainer" containerID="5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a" Feb 27 16:55:36 crc kubenswrapper[4708]: E0227 16:55:36.027495 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.047591 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.069942 4708 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\
"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.089629 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.105471 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.125902 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.144397 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.160471 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.176187 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.191843 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.209093 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.228058 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.248889 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.263464 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.279434 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access
-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.300498 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\"
:\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0
c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running
\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.337231 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"
2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/r
un/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"perator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0227 16:55:34.436115 6939 services_controller.go:452] Built service openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436130 6939 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436139 6939 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nI0227 16:55:34.436144 6939 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) 
load balancers\\\\nF0227 16:55:34.436097 6939 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name
\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:36Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.994992 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:36 crc kubenswrapper[4708]: E0227 16:55:36.995275 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:55:36 crc kubenswrapper[4708]: E0227 16:55:36.995328 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:55:36 crc kubenswrapper[4708]: E0227 16:55:36.995351 4708 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:36 crc kubenswrapper[4708]: E0227 16:55:36.995430 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-02-27 16:56:08.995404748 +0000 UTC m=+167.511202365 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.995297 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:36 crc kubenswrapper[4708]: I0227 16:55:36.995575 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:36 crc kubenswrapper[4708]: E0227 16:55:36.995711 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:55:36 crc kubenswrapper[4708]: E0227 16:55:36.995731 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:55:36 crc kubenswrapper[4708]: E0227 16:55:36.995747 4708 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:36 crc kubenswrapper[4708]: E0227 16:55:36.995789 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:56:08.995775539 +0000 UTC m=+167.511573156 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:55:36 crc kubenswrapper[4708]: E0227 16:55:36.996010 4708 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:55:36 crc kubenswrapper[4708]: E0227 16:55:36.996074 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-02-27 16:56:08.996059867 +0000 UTC m=+167.511857484 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:55:37 crc kubenswrapper[4708]: I0227 16:55:37.096600 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:55:37 crc kubenswrapper[4708]: E0227 16:55:37.096879 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:56:09.096813218 +0000 UTC m=+167.612610935 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:55:37 crc kubenswrapper[4708]: E0227 16:55:37.198729 4708 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:55:37 crc kubenswrapper[4708]: E0227 16:55:37.198829 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:56:09.198803315 +0000 UTC m=+167.714600942 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:55:37 crc kubenswrapper[4708]: I0227 16:55:37.198578 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:37 crc kubenswrapper[4708]: I0227 16:55:37.227799 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:37 crc kubenswrapper[4708]: E0227 16:55:37.228006 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:37 crc kubenswrapper[4708]: I0227 16:55:37.228534 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:37 crc kubenswrapper[4708]: E0227 16:55:37.228639 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:37 crc kubenswrapper[4708]: I0227 16:55:37.228710 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:37 crc kubenswrapper[4708]: E0227 16:55:37.228786 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:37 crc kubenswrapper[4708]: I0227 16:55:37.228841 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:37 crc kubenswrapper[4708]: E0227 16:55:37.228969 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:37 crc kubenswrapper[4708]: I0227 16:55:37.300322 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs\") pod \"network-metrics-daemon-4t52p\" (UID: \"79b58c0b-8d12-4391-999c-9689f9488f46\") " pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:37 crc kubenswrapper[4708]: E0227 16:55:37.300487 4708 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:55:37 crc kubenswrapper[4708]: E0227 16:55:37.300550 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs podName:79b58c0b-8d12-4391-999c-9689f9488f46 nodeName:}" failed. 
No retries permitted until 2026-02-27 16:56:09.300529734 +0000 UTC m=+167.816327361 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs") pod "network-metrics-daemon-4t52p" (UID: "79b58c0b-8d12-4391-999c-9689f9488f46") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:55:37 crc kubenswrapper[4708]: E0227 16:55:37.322475 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:55:38 crc kubenswrapper[4708]: I0227 16:55:38.242976 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Feb 27 16:55:39 crc kubenswrapper[4708]: I0227 16:55:39.227785 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:39 crc kubenswrapper[4708]: I0227 16:55:39.227832 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:39 crc kubenswrapper[4708]: I0227 16:55:39.227927 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:39 crc kubenswrapper[4708]: E0227 16:55:39.228110 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:39 crc kubenswrapper[4708]: I0227 16:55:39.228230 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:39 crc kubenswrapper[4708]: E0227 16:55:39.228305 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:39 crc kubenswrapper[4708]: E0227 16:55:39.228467 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:39 crc kubenswrapper[4708]: E0227 16:55:39.228577 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:40 crc kubenswrapper[4708]: I0227 16:55:40.296308 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:40 crc kubenswrapper[4708]: E0227 16:55:40.296510 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:41 crc kubenswrapper[4708]: I0227 16:55:41.227511 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:41 crc kubenswrapper[4708]: I0227 16:55:41.227534 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:41 crc kubenswrapper[4708]: E0227 16:55:41.227679 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:41 crc kubenswrapper[4708]: E0227 16:55:41.227808 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:41 crc kubenswrapper[4708]: I0227 16:55:41.227637 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:41 crc kubenswrapper[4708]: E0227 16:55:41.228055 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.228375 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:42 crc kubenswrapper[4708]: E0227 16:55:42.228613 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.249965 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.270601 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aad6547-9385-4e87-8a50-ef9fa275904e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://810bee86b148b1e5fdd078a0344b6c096ab5d8d8666c77e2b3fbb79c35c85cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2b33e584adf87a59f83bb2f5cd1f2640fda6fbee761f2aa0957ec11a100468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7699d4a917e83fccb6a984da6f39b7d253197c376e0936ea4518ec430088b5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.286058 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.305871 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling 
back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:42 crc kubenswrapper[4708]: E0227 16:55:42.324788 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.343179 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.364053 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.380310 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.396569 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.415534 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.435808 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.456053 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.475635 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.490743 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.511694 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\
\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.533606 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\
":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11
Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.563825 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b
231675f1a67015e17eca670a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"perator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0227 16:55:34.436115 6939 services_controller.go:452] Built service openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436130 6939 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436139 6939 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nI0227 16:55:34.436144 6939 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0227 16:55:34.436097 6939 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:42 crc kubenswrapper[4708]: I0227 16:55:42.582573 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:42Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:43 crc kubenswrapper[4708]: I0227 16:55:43.228292 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:43 crc kubenswrapper[4708]: I0227 16:55:43.228331 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:43 crc kubenswrapper[4708]: I0227 16:55:43.228390 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:43 crc kubenswrapper[4708]: E0227 16:55:43.229047 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:43 crc kubenswrapper[4708]: E0227 16:55:43.228822 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:43 crc kubenswrapper[4708]: E0227 16:55:43.229171 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.228020 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:44 crc kubenswrapper[4708]: E0227 16:55:44.228228 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.412136 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.431093 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.450728 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.472091 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.490966 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.506816 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.524310 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.545339 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.568166 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.
io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\
"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.597217 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d7732574532
65a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"perator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0227 16:55:34.436115 6939 services_controller.go:452] Built service openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436130 6939 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436139 6939 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nI0227 16:55:34.436144 6939 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0227 16:55:34.436097 6939 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s 
restarting failed container=ovnkube-controller pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\
\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.614333 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.635630 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.656490 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.673880 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aad6547-9385-4e87-8a50-ef9fa275904e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://810bee86b148b1e5fdd078a0344b6c096ab5d8d8666c77e2b3fbb79c35c85cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2b33e584adf87a59f83bb2f5cd1f2640fda6fbee761f2aa0957ec11a100468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7699d4a917e83fccb6a984da6f39b7d253197c376e0936ea4518ec430088b5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.694255 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.714480 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4
f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.733157 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:44 crc kubenswrapper[4708]: I0227 16:55:44.748921 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:44Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.227639 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.227780 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.227663 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:45 crc kubenswrapper[4708]: E0227 16:55:45.227962 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:45 crc kubenswrapper[4708]: E0227 16:55:45.228153 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:45 crc kubenswrapper[4708]: E0227 16:55:45.228308 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.417312 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.417372 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.417391 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.417415 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.417435 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:45Z","lastTransitionTime":"2026-02-27T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:45 crc kubenswrapper[4708]: E0227 16:55:45.437919 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeByt
es\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.442656 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.442770 4708 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.442828 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.442914 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.442936 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:45Z","lastTransitionTime":"2026-02-27T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:45 crc kubenswrapper[4708]: E0227 16:55:45.461898 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.467576 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.467628 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.467648 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.467673 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.467689 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:45Z","lastTransitionTime":"2026-02-27T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:45 crc kubenswrapper[4708]: E0227 16:55:45.487234 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.492073 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.492120 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.492136 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.492160 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.492176 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:45Z","lastTransitionTime":"2026-02-27T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:45 crc kubenswrapper[4708]: E0227 16:55:45.533310 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.538298 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.538367 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.538385 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.538412 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:45 crc kubenswrapper[4708]: I0227 16:55:45.538431 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:45Z","lastTransitionTime":"2026-02-27T16:55:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:45 crc kubenswrapper[4708]: E0227 16:55:45.559369 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:45Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:45Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:45 crc kubenswrapper[4708]: E0227 16:55:45.559592 4708 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:55:46 crc kubenswrapper[4708]: I0227 16:55:46.227827 4708 util.go:30] "No sandbox for pod can be found. 
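[Editor's note: the kubelet gives up after the retries above; the root cause is spelled out in the x509 error itself. The serving certificate for the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 expired on 2025-08-24T17:21:41Z, while the node clock reads 2026-02-27. A minimal Python sketch of the same validity check follows; the endpoint, port, and both timestamps are taken verbatim from the log, while running it on the CRC node itself and the availability of the third-party `cryptography` package are assumptions.]

```python
import socket
import ssl
from datetime import datetime, timezone

# Timestamps quoted verbatim in the kubelet error above.
now = datetime(2026, 2, 27, 16, 55, 45, tzinfo=timezone.utc)        # "current time"
not_after = datetime(2025, 8, 24, 17, 21, 41, tzinfo=timezone.utc)  # certificate notAfter

# The webhook cert expired roughly six months before this boot, which is why
# every status patch fails with "certificate has expired or is not yet valid".
assert now > not_after
print(f"certificate expired {(now - not_after).days} days ago")

# Fetch the live certificate from the webhook endpoint named in the log and
# print its notAfter. Verification is disabled because we only want to inspect
# the certificate, not trust the endpoint.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
with socket.create_connection(("127.0.0.1", 9743), timeout=5) as sock:
    with ctx.wrap_socket(sock) as tls:
        der = tls.getpeercert(binary_form=True)

from cryptography import x509  # third-party 'cryptography' package; assumed installed

cert = x509.load_der_x509_certificate(der)
# not_valid_after_utc requires cryptography >= 42; use not_valid_after on older releases.
print("notAfter:", cert.not_valid_after_utc)  # expect 2025-08-24 17:21:41+00:00
```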
Feb 27 16:55:46 crc kubenswrapper[4708]: I0227 16:55:46.227827 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 16:55:46 crc kubenswrapper[4708]: E0227 16:55:46.228078 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Feb 27 16:55:47 crc kubenswrapper[4708]: I0227 16:55:47.227713 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p"
Feb 27 16:55:47 crc kubenswrapper[4708]: I0227 16:55:47.227773 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 16:55:47 crc kubenswrapper[4708]: I0227 16:55:47.227833 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Feb 27 16:55:47 crc kubenswrapper[4708]: E0227 16:55:47.228135 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46"
Feb 27 16:55:47 crc kubenswrapper[4708]: E0227 16:55:47.228259 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Feb 27 16:55:47 crc kubenswrapper[4708]: E0227 16:55:47.228405 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Feb 27 16:55:47 crc kubenswrapper[4708]: E0227 16:55:47.326953 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 16:55:48 crc kubenswrapper[4708]: I0227 16:55:48.228590 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Feb 27 16:55:48 crc kubenswrapper[4708]: E0227 16:55:48.228988 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
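[Editor's note: every sandbox-creation failure in this stretch hangs off one condition: kubelet finds no CNI configuration in /etc/kubernetes/cni/net.d/, and nothing will write one while the ovnkube-controller container is itself crash-looping (see the CrashLoopBackOff record below). A minimal sketch of the check kubelet is effectively reporting; the directory path comes from the log, while the file-extension filter is an assumption based on standard CNI loading behavior.]

```python
import os

# Directory named in every NetworkPluginNotReady message above.
CNI_CONF_DIR = "/etc/kubernetes/cni/net.d/"

def cni_config_present(conf_dir: str = CNI_CONF_DIR) -> bool:
    """Return True if any CNI network definition exists in conf_dir."""
    if not os.path.isdir(conf_dir):
        return False
    # CNI loads .conf, .conflist and .json network definitions.
    return any(name.endswith((".conf", ".conflist", ".json"))
               for name in os.listdir(conf_dir))

if __name__ == "__main__":
    if cni_config_present():
        print("CNI config present; NetworkReady should recover")
    else:
        print("no CNI configuration file found - matches the kubelet errors above")
```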
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:49 crc kubenswrapper[4708]: I0227 16:55:49.227363 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:49 crc kubenswrapper[4708]: I0227 16:55:49.227419 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:49 crc kubenswrapper[4708]: I0227 16:55:49.227468 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:49 crc kubenswrapper[4708]: E0227 16:55:49.227594 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:49 crc kubenswrapper[4708]: E0227 16:55:49.227697 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:49 crc kubenswrapper[4708]: E0227 16:55:49.227802 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:50 crc kubenswrapper[4708]: I0227 16:55:50.228081 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:50 crc kubenswrapper[4708]: E0227 16:55:50.228266 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:51 crc kubenswrapper[4708]: I0227 16:55:51.227642 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:51 crc kubenswrapper[4708]: I0227 16:55:51.227661 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:51 crc kubenswrapper[4708]: I0227 16:55:51.227675 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:51 crc kubenswrapper[4708]: E0227 16:55:51.228131 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:51 crc kubenswrapper[4708]: E0227 16:55:51.228946 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:51 crc kubenswrapper[4708]: E0227 16:55:51.229146 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:51 crc kubenswrapper[4708]: I0227 16:55:51.229467 4708 scope.go:117] "RemoveContainer" containerID="5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a" Feb 27 16:55:51 crc kubenswrapper[4708]: E0227 16:55:51.229807 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.229471 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:52 crc kubenswrapper[4708]: E0227 16:55:52.229683 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.251405 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.273116 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.289691 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\
\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.310745 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\
\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: E0227 16:55:52.327828 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.331181 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\
\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.386751 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b
231675f1a67015e17eca670a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"perator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0227 16:55:34.436115 6939 services_controller.go:452] Built service openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436130 6939 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436139 6939 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nI0227 16:55:34.436144 6939 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0227 16:55:34.436097 6939 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.405505 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.423149 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.441638 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aad6547-9385-4e87-8a50-ef9fa275904e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://810bee86b148b1e5fdd078a0344b6c096ab5d8d8666c77e2b3fbb79c35c85cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2b33e584adf87a59f83bb2f5cd1f2640fda6fbee761f2aa0957ec11a100468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7699d4a917e83fccb6a984da6f39b7d253197c376e0936ea4518ec430088b5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-control
ler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.459301 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.484405 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.505589 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.526151 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.546388 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.562659 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.582415 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:52 crc kubenswrapper[4708]: I0227 16:55:52.603662 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:52Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.228331 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.228410 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.228534 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:53 crc kubenswrapper[4708]: E0227 16:55:53.228543 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:53 crc kubenswrapper[4708]: E0227 16:55:53.228690 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:53 crc kubenswrapper[4708]: E0227 16:55:53.228810 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.368200 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p6n6j_2c5353a5-c388-4046-bb29-8e73352588c2/kube-multus/0.log" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.368292 4708 generic.go:334] "Generic (PLEG): container finished" podID="2c5353a5-c388-4046-bb29-8e73352588c2" containerID="74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0" exitCode=1 Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.368347 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p6n6j" event={"ID":"2c5353a5-c388-4046-bb29-8e73352588c2","Type":"ContainerDied","Data":"74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0"} Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.369059 4708 scope.go:117] "RemoveContainer" containerID="74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.392455 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\
"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.413435 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.432134 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.452790 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.473661 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.495893 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.511272 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.531137 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.547630 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:53Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:53Z\\\",\\\"message\\\":\\\"2026-02-27T16:55:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a\\\\n2026-02-27T16:55:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a to /host/opt/cni/bin/\\\\n2026-02-27T16:55:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:55:07Z [verbose] Readiness Indicator file check\\\\n2026-02-27T16:55:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.568226 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.601361 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"perator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0227 16:55:34.436115 6939 services_controller.go:452] Built service openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436130 6939 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436139 6939 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nI0227 16:55:34.436144 6939 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0227 16:55:34.436097 6939 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.616694 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.634418 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.653053 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.666820 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.681822 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aad6547-9385-4e87-8a50-ef9fa275904e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://810bee86b148b1e5fdd078a0344b6c096ab5d8d8666c77e2b3fbb79c35c85cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2b33e584adf87a59f83bb2f5cd1f2640fda6fbee761f2aa0957ec11a100468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7699d4a917e83fccb6a984da6f39b7d253197c376e0936ea4518ec430088b5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:53 crc kubenswrapper[4708]: I0227 16:55:53.697723 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:53Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.227991 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:54 crc kubenswrapper[4708]: E0227 16:55:54.228168 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.375294 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p6n6j_2c5353a5-c388-4046-bb29-8e73352588c2/kube-multus/0.log" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.375380 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p6n6j" event={"ID":"2c5353a5-c388-4046-bb29-8e73352588c2","Type":"ContainerStarted","Data":"ce0bbc3f6718c3f80abb77d64bf1761e9f580c0379391ab3ce6ef4aa8912c4a7"} Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.398577 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.418457 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.434832 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.452423 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.473632 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.493979 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 
16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.513513 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.528412 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.544445 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.563565 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0bbc3f6718c3f80abb77d64bf1761e9f580c0379391ab3ce6ef4aa8912c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:53Z\\\",\\\"message\\\":\\\"2026-02-27T16:55:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a\\\\n2026-02-27T16:55:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a to /host/opt/cni/bin/\\\\n2026-02-27T16:55:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:55:07Z [verbose] Readiness Indicator file check\\\\n2026-02-27T16:55:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.587422 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.617066 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"perator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0227 16:55:34.436115 6939 services_controller.go:452] Built service openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436130 6939 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436139 6939 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nI0227 16:55:34.436144 6939 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0227 16:55:34.436097 6939 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.634621 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed 
to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.653000 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.671582 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.689640 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aad6547-9385-4e87-8a50-ef9fa275904e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://810bee86b148b1e5fdd078a0344b6c096ab5d8d8666c77e2b3fbb79c35c85cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2b33e584adf87a59f83bb2f5cd1f2640fda6fbee761f2aa0957ec11a100468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7699d4a917e83fccb6a984da6f39b7d253197c376e0936ea4518ec430088b5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:54 crc kubenswrapper[4708]: I0227 16:55:54.706047 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:54Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.228163 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.228220 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:55 crc kubenswrapper[4708]: E0227 16:55:55.228369 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:55 crc kubenswrapper[4708]: E0227 16:55:55.228575 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.228786 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:55 crc kubenswrapper[4708]: E0227 16:55:55.228972 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.570027 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.570098 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.570117 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.570150 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.570169 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:55Z","lastTransitionTime":"2026-02-27T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:55 crc kubenswrapper[4708]: E0227 16:55:55.592071 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:55Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.598534 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.598597 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.598618 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.598645 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.598663 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:55Z","lastTransitionTime":"2026-02-27T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:55 crc kubenswrapper[4708]: E0227 16:55:55.619025 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:55Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.624265 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.624309 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.624328 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.624353 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.624370 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:55Z","lastTransitionTime":"2026-02-27T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:55 crc kubenswrapper[4708]: E0227 16:55:55.646181 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:55Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.654321 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.654390 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.654430 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.654472 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.654498 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:55Z","lastTransitionTime":"2026-02-27T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:55 crc kubenswrapper[4708]: E0227 16:55:55.677990 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:55Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.684055 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.684103 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.684121 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.684146 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:55:55 crc kubenswrapper[4708]: I0227 16:55:55.684167 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:55:55Z","lastTransitionTime":"2026-02-27T16:55:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:55:55 crc kubenswrapper[4708]: E0227 16:55:55.704195 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:55:55Z is after 2025-08-24T17:21:41Z" Feb 27 16:55:55 crc kubenswrapper[4708]: E0227 16:55:55.704415 4708 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:55:56 crc kubenswrapper[4708]: I0227 16:55:56.227786 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:56 crc kubenswrapper[4708]: E0227 16:55:56.228013 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:57 crc kubenswrapper[4708]: I0227 16:55:57.227947 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:57 crc kubenswrapper[4708]: I0227 16:55:57.228036 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:57 crc kubenswrapper[4708]: I0227 16:55:57.227970 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:57 crc kubenswrapper[4708]: E0227 16:55:57.228173 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:55:57 crc kubenswrapper[4708]: E0227 16:55:57.228328 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:57 crc kubenswrapper[4708]: E0227 16:55:57.228510 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:57 crc kubenswrapper[4708]: E0227 16:55:57.329382 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:55:58 crc kubenswrapper[4708]: I0227 16:55:58.228068 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:55:58 crc kubenswrapper[4708]: E0227 16:55:58.228291 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:55:59 crc kubenswrapper[4708]: I0227 16:55:59.227420 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:55:59 crc kubenswrapper[4708]: I0227 16:55:59.227439 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:55:59 crc kubenswrapper[4708]: E0227 16:55:59.227623 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:55:59 crc kubenswrapper[4708]: E0227 16:55:59.227790 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:55:59 crc kubenswrapper[4708]: I0227 16:55:59.227455 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:55:59 crc kubenswrapper[4708]: E0227 16:55:59.227984 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:00 crc kubenswrapper[4708]: I0227 16:56:00.227701 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:00 crc kubenswrapper[4708]: E0227 16:56:00.227914 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:01 crc kubenswrapper[4708]: I0227 16:56:01.228137 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:01 crc kubenswrapper[4708]: I0227 16:56:01.228137 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:01 crc kubenswrapper[4708]: I0227 16:56:01.228185 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:01 crc kubenswrapper[4708]: E0227 16:56:01.228606 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:01 crc kubenswrapper[4708]: E0227 16:56:01.228842 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:01 crc kubenswrapper[4708]: E0227 16:56:01.229149 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:01 crc kubenswrapper[4708]: I0227 16:56:01.245908 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.228141 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:02 crc kubenswrapper[4708]: E0227 16:56:02.229228 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.247890 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.255199 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.276236 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 
16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.296448 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.312647 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.329054 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 
16:56:02 crc kubenswrapper[4708]: E0227 16:56:02.331000 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.350324 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.368339 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with 
unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.386026 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.400342 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.418784 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0bbc3f6718c3f80abb77d64bf1761e9f580c0379391ab3ce6ef4aa8912c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:53Z\\\",\\\"message\\\":\\\"2026-02-27T16:55:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a\\\\n2026-02-27T16:55:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a to /host/opt/cni/bin/\\\\n2026-02-27T16:55:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:55:07Z [verbose] Readiness Indicator file check\\\\n2026-02-27T16:55:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.440540 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.471453 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"perator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0227 16:55:34.436115 6939 services_controller.go:452] Built service openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436130 6939 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436139 6939 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nI0227 16:55:34.436144 6939 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0227 16:55:34.436097 6939 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.484631 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aad6547-9385-4e87-8a50-ef9fa275904e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://810bee86b148b1e5fdd078a0344b6c096ab5d8d8666c77e2b3fbb79c35c85cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2b33e584adf87a59f83bb2f5cd1f2640fda6fbee761f2aa0957ec11a100468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c
97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7699d4a917e83fccb6a984da6f39b7d253197c376e0936ea4518ec430088b5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.496071 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6988ce3f-79a5-4af0-974b-11bf78a0eae1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db669ad6a213f4d2cc324a27db72c0acd31a31110041ec13a3d5f814ec8824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99649371a89a5edf3755b32cf0e06a97758ae492b02d8de721cea66cb6e4e04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b99649371a89a5edf3755b32cf0e06a97758ae492b02d8de721cea66cb6e4e04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.512461 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.533720 4708 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488bee
daf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.552088 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:02 crc kubenswrapper[4708]: I0227 16:56:02.566712 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:02Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.227667 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:03 crc kubenswrapper[4708]: E0227 16:56:03.227789 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.228263 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.228472 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:03 crc kubenswrapper[4708]: E0227 16:56:03.228679 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.229110 4708 scope.go:117] "RemoveContainer" containerID="5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a" Feb 27 16:56:03 crc kubenswrapper[4708]: E0227 16:56:03.229503 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.414230 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/2.log" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.418568 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerStarted","Data":"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed"} Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.419590 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.440765 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.457995 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.482525 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0bbc3f6718c3f80abb77d64bf1761e9f580c0379391ab3ce6ef4aa8912c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:53Z\\\",\\\"message\\\":\\\"2026-02-27T16:55:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a\\\\n2026-02-27T16:55:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a to /host/opt/cni/bin/\\\\n2026-02-27T16:55:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:55:07Z [verbose] Readiness Indicator file check\\\\n2026-02-27T16:55:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.504664 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.525964 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"perator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0227 16:55:34.436115 6939 services_controller.go:452] Built service openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436130 6939 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436139 6939 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nI0227 16:55:34.436144 6939 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0227 16:55:34.436097 6939 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\
"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.557398 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d819db14-697e-4d3e-91db-99528c22f079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db933753890a81185cb51437c74fb549f424d32b14f82bfc23c65c1f03656ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2dfb10bb5fd1ae500cc0cfa9273a5b6d35ebdf1beeb773749e1199a0f6c402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f5d95290f15084ede686f64fe8c3d385247674568c1e1d742fc4e1d19dd4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30dedd3741667e4539dbb93fae6bdf7a12469ca
bc64c281107dc9c1607cc4aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca78cbe511dd1d30e907cb00a8c308083f86e23e2d8418e20c97b1ab78014ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c2a3185db334c61b4ac014fd8671f44dbb10499e45d16adff33e506cea8dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2a3185db334c61b4ac014fd8671f44dbb10499e45d16adff33e506cea8dd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d5461a3377cdf67e1093702283cf41c561dc0bbda831667e727ba8b908765f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d5461a3377cdf67e1093702283cf41c561dc0bbda831667e727ba8b908765f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd7fed89d7f990df53fa8a32fc10e34bba7f5cf75e6eb1df65b686abfb7d52f2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7fed89d7f990df53fa8a32fc10e34bba7f5cf75e6eb1df65b686abfb7d52f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.570295 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.587615 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.602666 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aad6547-9385-4e87-8a50-ef9fa275904e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://810bee86b148b1e5fdd078a0344b6c096ab5d8d8666c77e2b3fbb79c35c85cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2b33e584adf87a59f83bb2f5cd1f2640fda6fbee761f2aa0957ec11a100468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7699d4a917e83fccb6a984da6f39b7d253197c376e0936ea4518ec430088b5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.616676 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6988ce3f-79a5-4af0-974b-11bf78a0eae1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db669ad6a213f4d2cc324a27db72c0acd31a31110041ec13a3d5f814ec8824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99649371a89a5edf3755b32cf0e06a97758ae492b02d8de721cea66cb6e4e04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b99649371a89a5edf3755b32cf0e06a97758ae492b02d8de721cea66cb6e4e04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.653087 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.670132 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.688323 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.702381 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.716747 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.731220 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 
16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.747790 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.770598 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 
16:56:03 crc kubenswrapper[4708]: I0227 16:56:03.792560 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:03Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:04 crc kubenswrapper[4708]: I0227 16:56:04.228008 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:04 crc kubenswrapper[4708]: E0227 16:56:04.228204 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.227308 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.227361 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.227360 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:05 crc kubenswrapper[4708]: E0227 16:56:05.227482 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:05 crc kubenswrapper[4708]: E0227 16:56:05.227684 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:05 crc kubenswrapper[4708]: E0227 16:56:05.227789 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.429068 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/3.log" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.430281 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/2.log" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.434291 4708 generic.go:334] "Generic (PLEG): container finished" podID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerID="408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed" exitCode=1 Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.434340 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerDied","Data":"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed"} Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.434391 4708 scope.go:117] "RemoveContainer" containerID="5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.436771 4708 scope.go:117] "RemoveContainer" containerID="408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed" Feb 27 16:56:05 crc kubenswrapper[4708]: E0227 16:56:05.437215 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.462404 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aad6547-9385-4e87-8a50-ef9fa275904e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://810bee86b148b1e5fdd078a0344b6c096ab5d8d8666c77e2b3fbb79c35c85cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2b33e584adf87a59f83bb2f5cd1f2640fda6fbee761f2aa0957ec11a100468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7699d4a917e83fccb6a984da6f39b7d253197c376e0936ea4518ec430088b5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.479930 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6988ce3f-79a5-4af0-974b-11bf78a0eae1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db669ad6a213f4d2cc324a27db72c0acd31a31110041ec13a3d5f814ec8824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99649371a89a5edf3755b32cf0e06a97758ae492b02d8de721cea66cb6e4e04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b99649371a89a5edf3755b32cf0e06a97758ae492b02d8de721cea66cb6e4e04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.496782 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.518117 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.537248 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.553420 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.571681 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.591378 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 
16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.609942 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.625776 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.641779 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 
16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.671651 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d819db14-697e-4d3e-91db-99528c22f079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db933753890a81185cb51437c74fb549f424d32b14f82bfc23c65c1f03656ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2dfb10bb5fd1ae500cc0cfa9273a5b6d35ebdf1beeb773749e1199a0f6c402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f5d95290f15084ede686f64fe8c3d385247674568c1e1d742fc4e1d19dd4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"lo
g-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30dedd3741667e4539dbb93fae6bdf7a12469cabc64c281107dc9c1607cc4aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca78cbe511dd1d30e907cb00a8c308083f86e23e2d8418e20c97b1ab78014ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c2a3185db334c61b4ac014fd8671f44dbb10499e45d16adff33e506cea8dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2a3185db334c61b4ac014fd8671f44dbb10499e45d16adff33e506cea8dd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d5461a3377cdf67e1093702283cf41c561dc0bbda831667e727ba8b908765f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d5461a3377cdf67e1093702283cf41c561dc0bbda831667e727ba8b908765f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"reas
on\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd7fed89d7f990df53fa8a32fc10e34bba7f5cf75e6eb1df65b686abfb7d52f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7fed89d7f990df53fa8a32fc10e34bba7f5cf75e6eb1df65b686abfb7d52f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.688171 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.705597 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.726228 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.740991 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.759343 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0bbc3f6718c3f80abb77d64bf1761e9f580c0379391ab3ce6ef4aa8912c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:53Z\\\",\\\"message\\\":\\\"2026-02-27T16:55:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a\\\\n2026-02-27T16:55:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a to /host/opt/cni/bin/\\\\n2026-02-27T16:55:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:55:07Z [verbose] Readiness Indicator file check\\\\n2026-02-27T16:55:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.781579 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.811139 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"perator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0227 16:55:34.436115 6939 services_controller.go:452] Built service openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436130 6939 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436139 6939 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nI0227 16:55:34.436144 6939 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0227 16:55:34.436097 6939 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:56:04Z\\\",\\\"message\\\":\\\"nshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 16:56:04.407691 7217 
reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:56:04.407702 7217 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 16:56:04.409703 7217 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 16:56:04.409757 7217 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0227 16:56:04.409767 7217 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0227 16:56:04.409790 7217 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 16:56:04.409799 7217 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 16:56:04.409801 7217 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0227 16:56:04.409825 7217 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0227 16:56:04.409841 7217 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0227 16:56:04.409885 7217 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 16:56:04.409900 7217 factory.go:656] Stopping watch factory\\\\nI0227 16:56:04.409925 7217 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.870933 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.871003 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.871024 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.871104 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.871146 4708 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:56:05Z","lastTransitionTime":"2026-02-27T16:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:56:05 crc kubenswrapper[4708]: E0227 16:56:05.891467 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.895825 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.895911 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.895933 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.895963 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.895981 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:56:05Z","lastTransitionTime":"2026-02-27T16:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:56:05 crc kubenswrapper[4708]: E0227 16:56:05.915539 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.920361 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.920406 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.920424 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.920449 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.920465 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:56:05Z","lastTransitionTime":"2026-02-27T16:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:56:05 crc kubenswrapper[4708]: E0227 16:56:05.938694 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.943608 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.943657 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.943673 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.943694 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.943712 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:56:05Z","lastTransitionTime":"2026-02-27T16:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:56:05 crc kubenswrapper[4708]: E0227 16:56:05.963295 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.967841 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.967935 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.967955 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.967980 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:56:05 crc kubenswrapper[4708]: I0227 16:56:05.967998 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:56:05Z","lastTransitionTime":"2026-02-27T16:56:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:56:05 crc kubenswrapper[4708]: E0227 16:56:05.987035 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:05Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:05Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:05 crc kubenswrapper[4708]: E0227 16:56:05.987269 4708 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:56:06 crc kubenswrapper[4708]: I0227 16:56:06.228287 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:06 crc kubenswrapper[4708]: E0227 16:56:06.228481 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:06 crc kubenswrapper[4708]: I0227 16:56:06.449361 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/3.log" Feb 27 16:56:07 crc kubenswrapper[4708]: I0227 16:56:07.227779 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:07 crc kubenswrapper[4708]: I0227 16:56:07.227784 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:07 crc kubenswrapper[4708]: E0227 16:56:07.227982 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:07 crc kubenswrapper[4708]: E0227 16:56:07.228146 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:07 crc kubenswrapper[4708]: I0227 16:56:07.228460 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:07 crc kubenswrapper[4708]: E0227 16:56:07.229315 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:07 crc kubenswrapper[4708]: E0227 16:56:07.332319 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:56:08 crc kubenswrapper[4708]: I0227 16:56:08.227604 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:08 crc kubenswrapper[4708]: E0227 16:56:08.228155 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:09 crc kubenswrapper[4708]: I0227 16:56:09.019213 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:09 crc kubenswrapper[4708]: I0227 16:56:09.019297 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:09 crc kubenswrapper[4708]: I0227 16:56:09.019354 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.019447 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.019517 4708 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.019583 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.019522 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.019624 4708 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.019638 4708 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.019648 4708 projected.go:194] Error preparing data for 
projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.019603 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.019580898 +0000 UTC m=+231.535378525 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.019758 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.019730922 +0000 UTC m=+231.535528539 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.019780 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.019768653 +0000 UTC m=+231.535566270 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Feb 27 16:56:09 crc kubenswrapper[4708]: I0227 16:56:09.119753 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.120083 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.120062551 +0000 UTC m=+231.635860178 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:56:09 crc kubenswrapper[4708]: I0227 16:56:09.221202 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.221399 4708 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.221487 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.221464881 +0000 UTC m=+231.737262508 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Feb 27 16:56:09 crc kubenswrapper[4708]: I0227 16:56:09.228180 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:09 crc kubenswrapper[4708]: I0227 16:56:09.228236 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.228394 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:09 crc kubenswrapper[4708]: I0227 16:56:09.228487 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.228596 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.228842 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:09 crc kubenswrapper[4708]: I0227 16:56:09.322146 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs\") pod \"network-metrics-daemon-4t52p\" (UID: \"79b58c0b-8d12-4391-999c-9689f9488f46\") " pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.322329 4708 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:56:09 crc kubenswrapper[4708]: E0227 16:56:09.322407 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs podName:79b58c0b-8d12-4391-999c-9689f9488f46 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.322382457 +0000 UTC m=+231.838180084 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs") pod "network-metrics-daemon-4t52p" (UID: "79b58c0b-8d12-4391-999c-9689f9488f46") : object "openshift-multus"/"metrics-daemon-secret" not registered Feb 27 16:56:10 crc kubenswrapper[4708]: I0227 16:56:10.227913 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:10 crc kubenswrapper[4708]: E0227 16:56:10.228475 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:11 crc kubenswrapper[4708]: I0227 16:56:11.243305 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:11 crc kubenswrapper[4708]: I0227 16:56:11.243359 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:11 crc kubenswrapper[4708]: I0227 16:56:11.243326 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:11 crc kubenswrapper[4708]: E0227 16:56:11.243533 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:11 crc kubenswrapper[4708]: E0227 16:56:11.243719 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:11 crc kubenswrapper[4708]: E0227 16:56:11.243812 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.227938 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:12 crc kubenswrapper[4708]: E0227 16:56:12.229199 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.247231 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 
16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.267547 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.287944 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 
16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.306950 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.322150 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: E0227 16:56:12.333921 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.343341 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.366630 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0bbc3f6718c3f80abb77d64bf1761e9f580c0379391ab3ce6ef4aa8912c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:53Z\\\",\\\"message\\\":\\\"2026-02-27T16:55:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a\\\\n2026-02-27T16:55:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a to /host/opt/cni/bin/\\\\n2026-02-27T16:55:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:55:07Z [verbose] Readiness Indicator file check\\\\n2026-02-27T16:55:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.388615 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.418364 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5e7806063d0cbd9f1ec5fac4e86be4b54ff56d9b231675f1a67015e17eca670a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:34Z\\\",\\\"message\\\":\\\"perator-webhook\\\\\\\"}, Opts:services.LBOpts{Reject:true, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{services.LBRule{Source:services.Addr{IP:\\\\\\\"10.217.5.254\\\\\\\", Port:443, Template:(*services.Template)(nil)}, Targets:[]services.Addr{}}}, Templates:services.TemplateMap(nil), Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}\\\\nI0227 16:55:34.436115 6939 services_controller.go:452] Built service openshift-machine-api/machine-api-operator-webhook per-node LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436130 6939 services_controller.go:453] Built service openshift-machine-api/machine-api-operator-webhook template LB for network=default: []services.LB{}\\\\nI0227 16:55:34.436139 6939 services_controller.go:445] Built service openshift-marketplace/redhat-marketplace LB template configs for network=default: []services.lbConfig(nil)\\\\nI0227 16:55:34.436144 6939 services_controller.go:454] Service openshift-machine-api/machine-api-operator-webhook for network=default has 1 cluster-wide, 0 per-node configs, 0 template configs, making 1 (cluster) 0 (per node) and 0 (template) load balancers\\\\nF0227 16:55:34.436097 6939 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:56:04Z\\\",\\\"message\\\":\\\"nshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 16:56:04.407691 7217 
reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:56:04.407702 7217 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 16:56:04.409703 7217 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 16:56:04.409757 7217 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0227 16:56:04.409767 7217 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0227 16:56:04.409790 7217 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 16:56:04.409799 7217 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 16:56:04.409801 7217 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0227 16:56:04.409825 7217 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0227 16:56:04.409841 7217 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0227 16:56:04.409885 7217 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 16:56:04.409900 7217 factory.go:656] Stopping watch factory\\\\nI0227 16:56:04.409925 7217 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:56:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.449782 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d819db14-697e-4d3e-91db-99528c22f079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db933753890a81185cb51437c74fb549f424d32b14f82bfc23c65c1f03656ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2dfb10bb5fd1ae500cc0cfa9273a5b6d35ebdf1beeb773749e1199a0f6c402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f5d95290f15084ede686f64fe8c3d385247674568c1e1d742fc4e1d19dd4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30dedd3741667e4539dbb93fae6bdf7a12469ca
bc64c281107dc9c1607cc4aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca78cbe511dd1d30e907cb00a8c308083f86e23e2d8418e20c97b1ab78014ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c2a3185db334c61b4ac014fd8671f44dbb10499e45d16adff33e506cea8dd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2a3185db334c61b4ac014fd8671f44dbb10499e45d16adff33e506cea8dd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d5461a3377cdf67e1093702283cf41c561dc0bbda831667e727ba8b908765f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d5461a3377cdf67e1093702283cf41c561dc0bbda831667e727ba8b908765f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd7fed89d7f990df53fa8a32fc10e34bba7f5cf75e6eb1df65b686abfb7d52f2\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7fed89d7f990df53fa8a32fc10e34bba7f5cf75e6eb1df65b686abfb7d52f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.467466 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.507344 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.534069 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.557666 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aad6547-9385-4e87-8a50-ef9fa275904e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://810bee86b148b1e5fdd078a0344b6c096ab5d8d8666c77e2b3fbb79c35c85cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2b33e584adf87a59f83bb2f5cd1f2640fda6fbee761f2aa0957ec11a100468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7699d4a917e83fccb6a984da6f39b7d253197c376e0936ea4518ec430088b5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.570548 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6988ce3f-79a5-4af0-974b-11bf78a0eae1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db669ad6a213f4d2cc324a27db72c0acd31a31110041ec13a3d5f814ec8824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99649371a89a5edf3755b32cf0e06a97758ae492b02d8de721cea66cb6e4e04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b99649371a89a5edf3755b32cf0e06a97758ae492b02d8de721cea66cb6e4e04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.586087 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.604310 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-oper
ator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 
'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.642069 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:12 crc kubenswrapper[4708]: I0227 16:56:12.655342 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:12Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:13 crc kubenswrapper[4708]: I0227 16:56:13.227699 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:13 crc kubenswrapper[4708]: I0227 16:56:13.227734 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:13 crc kubenswrapper[4708]: I0227 16:56:13.227808 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:13 crc kubenswrapper[4708]: E0227 16:56:13.228615 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:13 crc kubenswrapper[4708]: E0227 16:56:13.228993 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:13 crc kubenswrapper[4708]: E0227 16:56:13.235734 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:14 crc kubenswrapper[4708]: I0227 16:56:14.228173 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:14 crc kubenswrapper[4708]: E0227 16:56:14.228361 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:15 crc kubenswrapper[4708]: I0227 16:56:15.228032 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:15 crc kubenswrapper[4708]: E0227 16:56:15.228525 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:15 crc kubenswrapper[4708]: I0227 16:56:15.228150 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:15 crc kubenswrapper[4708]: E0227 16:56:15.229505 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:15 crc kubenswrapper[4708]: I0227 16:56:15.228082 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:15 crc kubenswrapper[4708]: E0227 16:56:15.229884 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.122012 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.122066 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.122084 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.122107 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.122125 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:56:16Z","lastTransitionTime":"2026-02-27T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:56:16 crc kubenswrapper[4708]: E0227 16:56:16.142054 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.146825 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.146939 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.146961 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.146994 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.147019 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:56:16Z","lastTransitionTime":"2026-02-27T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:56:16 crc kubenswrapper[4708]: E0227 16:56:16.167057 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.171504 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.171551 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.171569 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.171590 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.171632 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:56:16Z","lastTransitionTime":"2026-02-27T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:56:16 crc kubenswrapper[4708]: E0227 16:56:16.190793 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.195403 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.195627 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.195760 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.195971 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.196115 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:56:16Z","lastTransitionTime":"2026-02-27T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:56:16 crc kubenswrapper[4708]: E0227 16:56:16.217097 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.222427 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.222487 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.222505 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.222531 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.222551 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:56:16Z","lastTransitionTime":"2026-02-27T16:56:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Feb 27 16:56:16 crc kubenswrapper[4708]: I0227 16:56:16.227642 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:16 crc kubenswrapper[4708]: E0227 16:56:16.227822 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:16 crc kubenswrapper[4708]: E0227 16:56:16.245202 4708 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-02-27T16:56:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"ab7c2cd5-c0bb-486f-8dae-402228064a6a\\\",\\\"systemUUID\\\":\\\"b0138667-dee2-429c-83f0-feff19c38749\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:16Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:16 crc kubenswrapper[4708]: E0227 16:56:16.246025 4708 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.227630 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.227679 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.227833 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:17 crc kubenswrapper[4708]: E0227 16:56:17.228020 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:17 crc kubenswrapper[4708]: E0227 16:56:17.228735 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:17 crc kubenswrapper[4708]: E0227 16:56:17.228994 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.229223 4708 scope.go:117] "RemoveContainer" containerID="408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed" Feb 27 16:56:17 crc kubenswrapper[4708]: E0227 16:56:17.229550 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.247767 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\
\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.267754 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.287211 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 
16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.306768 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.323039 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: E0227 16:56:17.334994 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.336535 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.355566 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0bbc3f6718c3f80abb77d64bf1761e9f580c0379391ab3ce6ef4aa8912c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:53Z\\\",\\\"message\\\":\\\"2026-02-27T16:55:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a\\\\n2026-02-27T16:55:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a to /host/opt/cni/bin/\\\\n2026-02-27T16:55:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:55:07Z [verbose] Readiness Indicator file check\\\\n2026-02-27T16:55:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.373053 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.404052 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:56:04Z\\\",\\\"message\\\":\\\"nshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 16:56:04.407691 7217 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:56:04.407702 7217 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 16:56:04.409703 7217 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 16:56:04.409757 7217 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0227 16:56:04.409767 7217 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0227 16:56:04.409790 7217 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 16:56:04.409799 7217 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 16:56:04.409801 7217 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0227 16:56:04.409825 7217 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0227 16:56:04.409841 7217 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0227 16:56:04.409885 7217 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 16:56:04.409900 7217 factory.go:656] Stopping watch factory\\\\nI0227 16:56:04.409925 7217 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:56:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.438266 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d819db14-697e-4d3e-91db-99528c22f079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db933753890a81185cb51437c74fb549f424d32b14f82bfc23c65c1f03656ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2dfb10bb5fd1ae500cc0cf
a9273a5b6d35ebdf1beeb773749e1199a0f6c402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f5d95290f15084ede686f64fe8c3d385247674568c1e1d742fc4e1d19dd4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30dedd3741667e4539dbb93fae6bdf7a12469cabc64c281107dc9c1607cc4aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca78cbe511dd1d30e907cb00a8c308083f86e23e2d8418e20c97b1ab78014ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c2a3185db334c61b4ac014fd8671f44dbb10499e45d16adff33e506cea8dd4\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2a3185db334c61b4ac014fd8671f44dbb10499e45d16adff33e506cea8dd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d5461a3377cdf67e1093702283cf41c561dc0bbda831667e727ba8b908765f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d5461a3377cdf67e1093702283cf41c561dc0bbda831667e727ba8b908765f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd7fed89d7f990df53fa8a32fc10e34bba7f5cf75e6eb1df65b686abfb7d52f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7fed89d7f990df53fa8a32fc10e34bba7f5cf75e6eb1df65b686abfb7d52f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.456538 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.476311 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.494761 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.512706 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aad6547-9385-4e87-8a50-ef9fa275904e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://810bee86b148b1e5fdd078a0344b6c096ab5d8d8666c77e2b3fbb79c35c85cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2b33e584adf87a59f83bb2f5cd1f2640fda6fbee761f2aa0957ec11a100468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53
:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7699d4a917e83fccb6a984da6f39b7d253197c376e0936ea4518ec430088b5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.528446 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6988ce3f-79a5-4af0-974b-11bf78a0eae1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db669ad6a213f4d2cc324a27db72c0acd31a31110041ec13a3d5f814ec8824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99649371a89a5edf3755b32cf0e06a97758ae492b02d8de721cea66cb6e4e04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b99649371a89a5edf3755b32cf0e06a97758ae492b02d8de721cea66cb6e4e04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.545134 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.566259 4708 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488bee
daf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.584525 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:17 crc kubenswrapper[4708]: I0227 16:56:17.600346 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:17Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:18 crc kubenswrapper[4708]: I0227 16:56:18.227904 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:18 crc kubenswrapper[4708]: E0227 16:56:18.228110 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:19 crc kubenswrapper[4708]: I0227 16:56:19.228024 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:19 crc kubenswrapper[4708]: I0227 16:56:19.228111 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:19 crc kubenswrapper[4708]: I0227 16:56:19.228034 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:19 crc kubenswrapper[4708]: E0227 16:56:19.228242 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:19 crc kubenswrapper[4708]: E0227 16:56:19.228378 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:19 crc kubenswrapper[4708]: E0227 16:56:19.228507 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:20 crc kubenswrapper[4708]: I0227 16:56:20.227424 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:20 crc kubenswrapper[4708]: E0227 16:56:20.227651 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:21 crc kubenswrapper[4708]: I0227 16:56:21.227549 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:21 crc kubenswrapper[4708]: I0227 16:56:21.227614 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:21 crc kubenswrapper[4708]: E0227 16:56:21.227735 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:21 crc kubenswrapper[4708]: I0227 16:56:21.228043 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:21 crc kubenswrapper[4708]: E0227 16:56:21.228155 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:21 crc kubenswrapper[4708]: E0227 16:56:21.228381 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.228431 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:22 crc kubenswrapper[4708]: E0227 16:56:22.228619 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.251545 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cc59a3b8a13961251289f4251e1cab11a028da9a6a4527b751f285eaa958e5a0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.272495 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e323b0165804600f8d27fd5a47540e277eb99bc51b514af25fc6e454de639e38\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://519b8785898ad96d2b3c319305f2bde0b454e93fa458c013b4e2a5ce4c9de144\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.290323 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-9s7tp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ca723997-3668-4afc-afdf-64ae7404b8ba\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a6ab3fdc8bbd2bb793a2561da4c2c075a09f8a12450319d6286231b491f8c960\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-g6ftv\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-9s7tp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.307619 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5763b282-e978-499f-a8e2-5b7ed78d691e\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://068663336fd4ff628042b7a56abeefc628b7ed3b14dccf79ded35ea45b5d5124\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5bc1b7b28b51fd3fa214cd405d8451e1c7c52b956acfd3b7a01d76c192aaaf3e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-c4qbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-blgrz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 
16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.328577 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fea95355-6e4d-4dce-9a8d-ca17abd51b4f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a3067281d4749e3c97dd9ad4ce5efc056a553a671469984ed33f916957589d9f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ec36ab003db6114225ea991b8b01f15e44596f6f382ce4ee3c4ecb0362c16aed\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:53:48Z\\\",\\\"message\\\":\\\"+ timeout 3m /bin/bash -exuo pipefail -c 'while [ -n \\\\\\\"$(ss -Htanop \\\\\\\\( sport = 10357 \\\\\\\\))\\\\\\\" ]; do sleep 1; done'\\\\n++ ss -Htanop '(' sport = 10357 ')'\\\\n+ '[' -n '' ']'\\\\n+ exec cluster-policy-controller start --config=/etc/kubernetes/static-pod-resources/configmaps/cluster-policy-controller-config/config.yaml --kubeconfig=/etc/kubernetes/static-pod-resources/configmaps/controller-manager-kubeconfig/kubeconfig --namespace=openshift-kube-controller-manager -v=2\\\\nI0227 16:53:25.012823 1 leaderelection.go:121] The leader election gives 4 retries and allows for 30s of clock skew. The kube-apiserver downtime tolerance is 78s. Worst non-graceful lease acquisition is 2m43s. 
Worst graceful lease acquisition is {26s}.\\\\nI0227 16:53:25.015692 1 observer_polling.go:159] Starting file observer\\\\nI0227 16:53:25.093957 1 builder.go:298] cluster-policy-controller version 4.18.0-202501230001.p0.g5fd8525.assembly.stream.el9-5fd8525-5fd852525909ce6eab52972ba9ce8fcf56528eb9\\\\nI0227 16:53:25.107445 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.crt::/etc/kubernetes/static-pod-resources/secrets/serving-cert/tls.key\\\\\\\"\\\\nI0227 16:53:48.942505 1 cmd.go:138] Received SIGTERM or SIGINT signal, shutting down controller.\\\\nF0227 16:53:48.942643 1 cmd.go:179] failed checking apiserver connectivity: Get \\\\\\\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/openshift-kube-controller-manager/leases/cluster-policy-controller-lock\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:53:48Z is after 2026-02-23T05:33:13Z\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:49Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d3e786a7c20583b9fc0867b8e3f6146ffde7670f741390f58b0f9bca97621ef7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9e28f9647f382c2b9b8b2d1bc121e21b04576b2c3095b6061c1c4ec2dc469c43\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-clust
er-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: E0227 16:56:22.336009 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.349477 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5eae5a471a3dfc62c4f93a6cde34c6779c3d270d95f7236247fa6cbb8d3ab51\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.370801 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.389648 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.406834 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-hz8lb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de7f119-b85b-44ae-a478-443eca219825\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cf7031acd4825cf8b42068b2f3add29a64305e0547b0029ef5f94427ee3cafc4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xv98f\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-hz8lb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.427761 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-p6n6j" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2c5353a5-c388-4046-bb29-8e73352588c2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ce0bbc3f6718c3f80abb77d64bf1761e9f580c0379391ab3ce6ef4aa8912c4a7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:55:53Z\\\",\\\"message\\\":\\\"2026-02-27T16:55:07+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a\\\\n2026-02-27T16:55:07+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_ee114728-1ed0-4762-a78c-abee4127243a to /host/opt/cni/bin/\\\\n2026-02-27T16:55:07Z [verbose] multus-daemon started\\\\n2026-02-27T16:55:07Z [verbose] Readiness Indicator file check\\\\n2026-02-27T16:55:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vtx74\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-p6n6j\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.452800 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-bp77l" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9679eeaa-b9db-4a4a-a4aa-ec0f5e3ecd6b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e6d8526cff9491cfaa9b90510f7a012c67672e8b7354e95dc86a3c941ea26859\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa4886a61ca21e5d74e20201bc7a1dd06f87f1fb15982ae908afafbea232d992\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c8
57df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://23ef12ec1e974cd87a25a34f51051b5baa0462ad6e0758c25a6be4f7abdedc82\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://399db8adc0e893b70613006071c9e1a6482e87e2e493238554d49e3c7bf72ca4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d679e13e6d4cbe6bae25170ad3665102362687e3a2daea4e886b8a82275e6617\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/se
crets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3341cacc2bf16c3cd7963ba48f7fcc8d792136d5727ead35e11a8d70bf5aafb1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e34efd962db2da4772fb8efcb4e9b2cf4deff05aedb7ffc0495077041bfc9641\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-k2mvm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-bp77l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.483077 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"7efaba13-6a00-4f12-9e83-5a66a2246554\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-02-27T16:56:04Z\\\",\\\"message\\\":\\\"nshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0227 16:56:04.407691 7217 reflector.go:311] Stopping reflector *v1.Pod (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0227 16:56:04.407702 7217 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0227 16:56:04.409703 7217 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0227 16:56:04.409757 7217 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0227 16:56:04.409767 7217 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0227 16:56:04.409790 7217 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0227 16:56:04.409799 7217 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0227 16:56:04.409801 7217 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0227 16:56:04.409825 7217 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0227 16:56:04.409841 7217 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0227 16:56:04.409885 7217 handler.go:208] Removed *v1.Node event handler 2\\\\nI0227 16:56:04.409900 7217 factory.go:656] Stopping watch factory\\\\nI0227 16:56:04.409925 7217 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:56:03Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dc6tg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-l82mg\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.517223 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d819db14-697e-4d3e-91db-99528c22f079\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:26Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2db933753890a81185cb51437c74fb549f424d32b14f82bfc23c65c1f03656ce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cf2dfb10bb5fd1ae500cc0cf
a9273a5b6d35ebdf1beeb773749e1199a0f6c402\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://69f5d95290f15084ede686f64fe8c3d385247674568c1e1d742fc4e1d19dd4e2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://30dedd3741667e4539dbb93fae6bdf7a12469cabc64c281107dc9c1607cc4aa3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://5ca78cbe511dd1d30e907cb00a8c308083f86e23e2d8418e20c97b1ab78014ae\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a4c2a3185db334c61b4ac014fd8671f44dbb10499e45d16adff33e506cea8dd4\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a4c2a3185db334c61b4ac014fd8671f44dbb10499e45d16adff33e506cea8dd4\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3d5461a3377cdf67e1093702283cf41c561dc0bbda831667e727ba8b908765f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a3d5461a3377cdf67e1093702283cf41c561dc0bbda831667e727ba8b908765f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bd7fed89d7f990df53fa8a32fc10e34bba7f5cf75e6eb1df65b686abfb7d52f2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bd7fed89d7f990df53fa8a32fc10e34bba7f5cf75e6eb1df65b686abfb7d52f2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.536243 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6988ce3f-79a5-4af0-974b-11bf78a0eae1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://15db669ad6a213f4d2cc324a27db72c0acd31a31110041ec13a3d5f814ec8824\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b99649371a89a5edf3755b32cf0e06a97758ae492b02d8de721cea66cb6e4e04\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b99649371a89a5edf3755b32cf0e06a97758ae492b02d8de721cea66cb6e4e04\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.553808 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7a63d5ce1e7a668b5334ef917e1af38f54109f762f29813a762cf89954fff907\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zf88c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-kvxg2\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.574577 4708 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8aad6547-9385-4e87-8a50-ef9fa275904e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:54:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://810bee86b148b1e5fdd078a0344b6c096ab5d8d8666c77e2b3fbb79c35c85cef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c2b33e584adf87a59f83bb2f5cd1f2640fda6fbee761f2aa0957ec11a100468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7699d4a917e83fccb6a984da6f39b7d253197c376e0936ea4518ec430088b5e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":
[{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://56a160707094040a89acc084492a7ba54dedd041a2ec4bd3056ade7c99b2b25d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.593591 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.610664 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-4t52p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"79b58c0b-8d12-4391-999c-9689f9488f46\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:05Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-wgq8g\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:55:05Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-4t52p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:22 crc kubenswrapper[4708]: I0227 16:56:22.633277 4708 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37500f59-8db5-4c44-b24c-5abacbddf26b\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:55:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-02-27T16:53:22Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-02-27T16:54:34Z\\\",\\\"message\\\":\\\"le observer\\\\nW0227 16:54:33.881607 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0227 16:54:33.881787 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0227 16:54:33.882349 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2427579452/tls.crt::/tmp/serving-cert-2427579452/tls.key\\\\\\\"\\\\nI0227 16:54:34.157345 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0227 16:54:34.161839 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0227 16:54:34.161876 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0227 16:54:34.161907 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0227 16:54:34.161914 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0227 16:54:34.167419 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0227 16:54:34.167484 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167492 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0227 16:54:34.167504 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0227 16:54:34.167513 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0227 16:54:34.167520 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0227 16:54:34.167526 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nI0227 16:54:34.167663 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nF0227 16:54:34.168527 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-02-27T16:54:33Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":4,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:55:25Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-02-27T16:53:25Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-02-27T16:53:23Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-02-27T16:53:23Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-02-27T16:53:22Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-02-27T16:56:22Z is after 2025-08-24T17:21:41Z" Feb 27 16:56:23 crc kubenswrapper[4708]: I0227 16:56:23.228319 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:23 crc kubenswrapper[4708]: I0227 16:56:23.228374 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:23 crc kubenswrapper[4708]: I0227 16:56:23.228415 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:23 crc kubenswrapper[4708]: E0227 16:56:23.228520 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:23 crc kubenswrapper[4708]: E0227 16:56:23.228637 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:23 crc kubenswrapper[4708]: E0227 16:56:23.228786 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:24 crc kubenswrapper[4708]: I0227 16:56:24.228379 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:24 crc kubenswrapper[4708]: E0227 16:56:24.228583 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:25 crc kubenswrapper[4708]: I0227 16:56:25.227506 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:25 crc kubenswrapper[4708]: E0227 16:56:25.227713 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:25 crc kubenswrapper[4708]: I0227 16:56:25.227780 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:25 crc kubenswrapper[4708]: I0227 16:56:25.227912 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:25 crc kubenswrapper[4708]: E0227 16:56:25.228247 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:25 crc kubenswrapper[4708]: E0227 16:56:25.228416 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.228383 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:26 crc kubenswrapper[4708]: E0227 16:56:26.228591 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.301987 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.302075 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.302097 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.302118 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.302134 4708 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-02-27T16:56:26Z","lastTransitionTime":"2026-02-27T16:56:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.340062 4708 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.350367 4708 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.379937 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj"] Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.380482 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.383591 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.384018 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.384223 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.387634 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.408691 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=69.408669287 podStartE2EDuration="1m9.408669287s" podCreationTimestamp="2026-02-27 16:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:56:26.408236155 +0000 UTC m=+184.924033782" watchObservedRunningTime="2026-02-27 16:56:26.408669287 +0000 UTC m=+184.924466914" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.496724 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.496807 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.496880 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.496970 4708 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.497047 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.503295 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-9s7tp" podStartSLOduration=134.503264871 podStartE2EDuration="2m14.503264871s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:56:26.483234424 +0000 UTC m=+184.999032051" watchObservedRunningTime="2026-02-27 16:56:26.503264871 +0000 UTC m=+185.019062498" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.525948 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-blgrz" podStartSLOduration=133.525914183 podStartE2EDuration="2m13.525914183s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:56:26.504167967 +0000 UTC m=+185.019965584" watchObservedRunningTime="2026-02-27 16:56:26.525914183 +0000 UTC m=+185.041711810" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.526790 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-p6n6j" podStartSLOduration=134.526773648 podStartE2EDuration="2m14.526773648s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:56:26.525688457 +0000 UTC m=+185.041486114" watchObservedRunningTime="2026-02-27 16:56:26.526773648 +0000 UTC m=+185.042571265" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.557359 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-bp77l" podStartSLOduration=133.557329928 podStartE2EDuration="2m13.557329928s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:56:26.556611447 +0000 UTC m=+185.072409094" watchObservedRunningTime="2026-02-27 16:56:26.557329928 +0000 UTC m=+185.073127555" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.598393 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.598462 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.598499 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.598551 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.598589 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.598653 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.598721 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.600228 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-service-ca\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.620968 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.634061 
4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/708cc169-9a9e-4ee1-991c-4302fe7aa0cd-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-mm4gj\" (UID: \"708cc169-9a9e-4ee1-991c-4302fe7aa0cd\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.655282 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=24.655252438 podStartE2EDuration="24.655252438s" podCreationTimestamp="2026-02-27 16:56:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:56:26.634640104 +0000 UTC m=+185.150437731" watchObservedRunningTime="2026-02-27 16:56:26.655252438 +0000 UTC m=+185.171050065" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.702676 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.743905 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=48.743840439 podStartE2EDuration="48.743840439s" podCreationTimestamp="2026-02-27 16:55:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:56:26.74319953 +0000 UTC m=+185.258997127" watchObservedRunningTime="2026-02-27 16:56:26.743840439 +0000 UTC m=+185.259638066" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.744542 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-hz8lb" podStartSLOduration=134.744528509 podStartE2EDuration="2m14.744528509s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:56:26.717813489 +0000 UTC m=+185.233611116" watchObservedRunningTime="2026-02-27 16:56:26.744528509 +0000 UTC m=+185.260326136" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.780416 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.780395941 podStartE2EDuration="25.780395941s" podCreationTimestamp="2026-02-27 16:56:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:56:26.76160643 +0000 UTC m=+185.277404057" watchObservedRunningTime="2026-02-27 16:56:26.780395941 +0000 UTC m=+185.296193568" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.802843 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=73.802818607 podStartE2EDuration="1m13.802818607s" podCreationTimestamp="2026-02-27 16:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:56:26.801735096 +0000 UTC m=+185.317532693" watchObservedRunningTime="2026-02-27 16:56:26.802818607 +0000 UTC m=+185.318616234" Feb 27 16:56:26 crc kubenswrapper[4708]: I0227 16:56:26.803061 4708 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podStartSLOduration=134.803053034 podStartE2EDuration="2m14.803053034s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:56:26.781819272 +0000 UTC m=+185.297616899" watchObservedRunningTime="2026-02-27 16:56:26.803053034 +0000 UTC m=+185.318850661" Feb 27 16:56:27 crc kubenswrapper[4708]: I0227 16:56:27.227564 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:27 crc kubenswrapper[4708]: I0227 16:56:27.227672 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:27 crc kubenswrapper[4708]: I0227 16:56:27.228652 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:27 crc kubenswrapper[4708]: E0227 16:56:27.228799 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:27 crc kubenswrapper[4708]: E0227 16:56:27.229154 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:27 crc kubenswrapper[4708]: E0227 16:56:27.229309 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:27 crc kubenswrapper[4708]: E0227 16:56:27.337928 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
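
Every status-patch failure in the stretch above has the same root cause: the kubelet's PATCH against the pod status subresource must pass the pod.network-node-identity.openshift.io admission webhook at https://127.0.0.1:9743/pod, and the TLS handshake fails because the webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-02-27. A quick way to confirm which certificate is at fault is to dial the endpoint and print its validity window. The Go sketch below does that; it is a diagnostic aid written for this report, not code from any component logging above, and it assumes the endpoint is reachable from a shell on the node.

// certprobe.go (hypothetical name): print the validity window of the
// serving certificate behind the failing webhook Post seen in the log.
package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Address taken from the failing request in the log:
	// Post "https://127.0.0.1:9743/pod?timeout=10s"
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		// Skip verification so the handshake succeeds even though the
		// certificate is expired; we only want to read it, not trust it.
		InsecureSkipVerify: true,
	})
	if err != nil {
		fmt.Println("dial failed:", err)
		return
	}
	defer conn.Close()

	cert := conn.ConnectionState().PeerCertificates[0]
	fmt.Printf("subject:   %s\n", cert.Subject)
	fmt.Printf("notBefore: %s\n", cert.NotBefore.Format(time.RFC3339))
	fmt.Printf("notAfter:  %s\n", cert.NotAfter.Format(time.RFC3339))
	// The same comparison the verifier performs; the log's
	// "current time ... is after ..." message means now > NotAfter.
	if time.Now().After(cert.NotAfter) {
		fmt.Println("certificate is expired")
	}
}

InsecureSkipVerify is acceptable here only because the connection is used to read the peer certificate and then closed; nothing is sent over it.
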
Feb 27 16:56:27 crc kubenswrapper[4708]: I0227 16:56:27.539041 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" event={"ID":"708cc169-9a9e-4ee1-991c-4302fe7aa0cd","Type":"ContainerStarted","Data":"f2f1d11d615fd53876f9c16b5b2fe622450e485af715ed793724d70fbd1ba2f7"} Feb 27 16:56:27 crc kubenswrapper[4708]: I0227 16:56:27.539143 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" event={"ID":"708cc169-9a9e-4ee1-991c-4302fe7aa0cd","Type":"ContainerStarted","Data":"f2bd3252151e9cf7bca57b3834db77d84d836b01aa1c467124a3568a6d42a4ce"} Feb 27 16:56:27 crc kubenswrapper[4708]: I0227 16:56:27.558689 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-mm4gj" podStartSLOduration=135.558666713 podStartE2EDuration="2m15.558666713s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:56:27.557826119 +0000 UTC m=+186.073623756" watchObservedRunningTime="2026-02-27 16:56:27.558666713 +0000 UTC m=+186.074464330" Feb 27 16:56:28 crc kubenswrapper[4708]: I0227 16:56:28.228367 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:28 crc kubenswrapper[4708]: E0227 16:56:28.228550 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:29 crc kubenswrapper[4708]: I0227 16:56:29.228167 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:29 crc kubenswrapper[4708]: I0227 16:56:29.228276 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:29 crc kubenswrapper[4708]: I0227 16:56:29.228317 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:29 crc kubenswrapper[4708]: E0227 16:56:29.228387 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:29 crc kubenswrapper[4708]: E0227 16:56:29.228497 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:29 crc kubenswrapper[4708]: E0227 16:56:29.228621 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:30 crc kubenswrapper[4708]: I0227 16:56:30.227839 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:30 crc kubenswrapper[4708]: E0227 16:56:30.228474 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:31 crc kubenswrapper[4708]: I0227 16:56:31.227654 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:31 crc kubenswrapper[4708]: E0227 16:56:31.228448 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:31 crc kubenswrapper[4708]: I0227 16:56:31.227678 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:31 crc kubenswrapper[4708]: E0227 16:56:31.228713 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:31 crc kubenswrapper[4708]: I0227 16:56:31.227651 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:31 crc kubenswrapper[4708]: E0227 16:56:31.228956 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:32 crc kubenswrapper[4708]: I0227 16:56:32.227932 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:32 crc kubenswrapper[4708]: E0227 16:56:32.230007 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:32 crc kubenswrapper[4708]: I0227 16:56:32.231214 4708 scope.go:117] "RemoveContainer" containerID="408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed" Feb 27 16:56:32 crc kubenswrapper[4708]: E0227 16:56:32.231616 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-l82mg_openshift-ovn-kubernetes(7efaba13-6a00-4f12-9e83-5a66a2246554)\"" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" Feb 27 16:56:32 crc kubenswrapper[4708]: E0227 16:56:32.338640 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:56:33 crc kubenswrapper[4708]: I0227 16:56:33.227977 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:33 crc kubenswrapper[4708]: I0227 16:56:33.228020 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:33 crc kubenswrapper[4708]: I0227 16:56:33.228003 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:33 crc kubenswrapper[4708]: E0227 16:56:33.228180 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:33 crc kubenswrapper[4708]: E0227 16:56:33.228301 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:33 crc kubenswrapper[4708]: E0227 16:56:33.228433 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:34 crc kubenswrapper[4708]: I0227 16:56:34.228075 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:34 crc kubenswrapper[4708]: E0227 16:56:34.228580 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:35 crc kubenswrapper[4708]: I0227 16:56:35.228281 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:35 crc kubenswrapper[4708]: I0227 16:56:35.228349 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:35 crc kubenswrapper[4708]: E0227 16:56:35.228469 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:35 crc kubenswrapper[4708]: E0227 16:56:35.228609 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:35 crc kubenswrapper[4708]: I0227 16:56:35.229091 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:35 crc kubenswrapper[4708]: E0227 16:56:35.229299 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:36 crc kubenswrapper[4708]: I0227 16:56:36.227883 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:36 crc kubenswrapper[4708]: E0227 16:56:36.228088 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:37 crc kubenswrapper[4708]: I0227 16:56:37.227397 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:37 crc kubenswrapper[4708]: I0227 16:56:37.227476 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:37 crc kubenswrapper[4708]: I0227 16:56:37.227417 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:37 crc kubenswrapper[4708]: E0227 16:56:37.227613 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:37 crc kubenswrapper[4708]: E0227 16:56:37.227770 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:37 crc kubenswrapper[4708]: E0227 16:56:37.227895 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:37 crc kubenswrapper[4708]: E0227 16:56:37.340422 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:56:38 crc kubenswrapper[4708]: I0227 16:56:38.227447 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:38 crc kubenswrapper[4708]: E0227 16:56:38.227765 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:39 crc kubenswrapper[4708]: I0227 16:56:39.228093 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:39 crc kubenswrapper[4708]: I0227 16:56:39.228149 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:39 crc kubenswrapper[4708]: E0227 16:56:39.228762 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:39 crc kubenswrapper[4708]: E0227 16:56:39.228936 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:39 crc kubenswrapper[4708]: I0227 16:56:39.228170 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:39 crc kubenswrapper[4708]: E0227 16:56:39.229181 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:39 crc kubenswrapper[4708]: I0227 16:56:39.674898 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p6n6j_2c5353a5-c388-4046-bb29-8e73352588c2/kube-multus/1.log" Feb 27 16:56:39 crc kubenswrapper[4708]: I0227 16:56:39.675836 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p6n6j_2c5353a5-c388-4046-bb29-8e73352588c2/kube-multus/0.log" Feb 27 16:56:39 crc kubenswrapper[4708]: I0227 16:56:39.675940 4708 generic.go:334] "Generic (PLEG): container finished" podID="2c5353a5-c388-4046-bb29-8e73352588c2" containerID="ce0bbc3f6718c3f80abb77d64bf1761e9f580c0379391ab3ce6ef4aa8912c4a7" exitCode=1 Feb 27 16:56:39 crc kubenswrapper[4708]: I0227 16:56:39.675993 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p6n6j" event={"ID":"2c5353a5-c388-4046-bb29-8e73352588c2","Type":"ContainerDied","Data":"ce0bbc3f6718c3f80abb77d64bf1761e9f580c0379391ab3ce6ef4aa8912c4a7"} Feb 27 16:56:39 crc kubenswrapper[4708]: I0227 16:56:39.676088 4708 scope.go:117] "RemoveContainer" containerID="74495f438ba766d17d9b14c7aa0b4c142ce30c70141c9c148176374003f44ca0" Feb 27 16:56:39 crc kubenswrapper[4708]: I0227 16:56:39.676649 4708 scope.go:117] "RemoveContainer" containerID="ce0bbc3f6718c3f80abb77d64bf1761e9f580c0379391ab3ce6ef4aa8912c4a7" Feb 27 16:56:39 crc kubenswrapper[4708]: E0227 16:56:39.676996 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-p6n6j_openshift-multus(2c5353a5-c388-4046-bb29-8e73352588c2)\"" pod="openshift-multus/multus-p6n6j" podUID="2c5353a5-c388-4046-bb29-8e73352588c2" Feb 27 16:56:40 crc kubenswrapper[4708]: I0227 16:56:40.228375 4708 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:40 crc kubenswrapper[4708]: E0227 16:56:40.228564 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:40 crc kubenswrapper[4708]: I0227 16:56:40.680977 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p6n6j_2c5353a5-c388-4046-bb29-8e73352588c2/kube-multus/1.log" Feb 27 16:56:41 crc kubenswrapper[4708]: I0227 16:56:41.227359 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:41 crc kubenswrapper[4708]: I0227 16:56:41.227359 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:41 crc kubenswrapper[4708]: E0227 16:56:41.227524 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:41 crc kubenswrapper[4708]: E0227 16:56:41.227662 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:41 crc kubenswrapper[4708]: I0227 16:56:41.228007 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:41 crc kubenswrapper[4708]: E0227 16:56:41.228248 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:42 crc kubenswrapper[4708]: I0227 16:56:42.227948 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:42 crc kubenswrapper[4708]: E0227 16:56:42.229771 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:42 crc kubenswrapper[4708]: E0227 16:56:42.341092 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:56:43 crc kubenswrapper[4708]: I0227 16:56:43.228108 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:43 crc kubenswrapper[4708]: I0227 16:56:43.228115 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:43 crc kubenswrapper[4708]: E0227 16:56:43.229147 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:43 crc kubenswrapper[4708]: E0227 16:56:43.229291 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:43 crc kubenswrapper[4708]: I0227 16:56:43.228171 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:43 crc kubenswrapper[4708]: E0227 16:56:43.229450 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:44 crc kubenswrapper[4708]: I0227 16:56:44.227641 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:44 crc kubenswrapper[4708]: E0227 16:56:44.227821 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:45 crc kubenswrapper[4708]: I0227 16:56:45.228113 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:45 crc kubenswrapper[4708]: I0227 16:56:45.228243 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:45 crc kubenswrapper[4708]: I0227 16:56:45.228596 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:45 crc kubenswrapper[4708]: E0227 16:56:45.228750 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:45 crc kubenswrapper[4708]: E0227 16:56:45.228985 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:45 crc kubenswrapper[4708]: E0227 16:56:45.229138 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:46 crc kubenswrapper[4708]: I0227 16:56:46.227700 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:46 crc kubenswrapper[4708]: E0227 16:56:46.227922 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:47 crc kubenswrapper[4708]: I0227 16:56:47.228012 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:47 crc kubenswrapper[4708]: I0227 16:56:47.228037 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:47 crc kubenswrapper[4708]: I0227 16:56:47.228089 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:47 crc kubenswrapper[4708]: E0227 16:56:47.229187 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:47 crc kubenswrapper[4708]: E0227 16:56:47.229151 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:47 crc kubenswrapper[4708]: E0227 16:56:47.229346 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:47 crc kubenswrapper[4708]: I0227 16:56:47.229505 4708 scope.go:117] "RemoveContainer" containerID="408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed" Feb 27 16:56:47 crc kubenswrapper[4708]: E0227 16:56:47.342470 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:56:47 crc kubenswrapper[4708]: I0227 16:56:47.719964 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/3.log" Feb 27 16:56:47 crc kubenswrapper[4708]: I0227 16:56:47.724292 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerStarted","Data":"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914"} Feb 27 16:56:47 crc kubenswrapper[4708]: I0227 16:56:47.724967 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:56:47 crc kubenswrapper[4708]: I0227 16:56:47.771557 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podStartSLOduration=154.771536082 podStartE2EDuration="2m34.771536082s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:56:47.770178356 +0000 UTC m=+206.285975953" watchObservedRunningTime="2026-02-27 16:56:47.771536082 +0000 UTC m=+206.287333679" Feb 27 16:56:48 crc kubenswrapper[4708]: I0227 16:56:48.227927 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:48 crc kubenswrapper[4708]: E0227 16:56:48.228139 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:48 crc kubenswrapper[4708]: I0227 16:56:48.275226 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4t52p"] Feb 27 16:56:48 crc kubenswrapper[4708]: I0227 16:56:48.275295 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:48 crc kubenswrapper[4708]: E0227 16:56:48.275375 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:49 crc kubenswrapper[4708]: I0227 16:56:49.227542 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:49 crc kubenswrapper[4708]: I0227 16:56:49.227601 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:49 crc kubenswrapper[4708]: E0227 16:56:49.228273 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:49 crc kubenswrapper[4708]: E0227 16:56:49.228082 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:50 crc kubenswrapper[4708]: I0227 16:56:50.228061 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:50 crc kubenswrapper[4708]: I0227 16:56:50.228238 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:50 crc kubenswrapper[4708]: E0227 16:56:50.228436 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:50 crc kubenswrapper[4708]: E0227 16:56:50.228787 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:51 crc kubenswrapper[4708]: I0227 16:56:51.227338 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:51 crc kubenswrapper[4708]: I0227 16:56:51.227339 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:51 crc kubenswrapper[4708]: E0227 16:56:51.227575 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:51 crc kubenswrapper[4708]: E0227 16:56:51.227722 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:52 crc kubenswrapper[4708]: I0227 16:56:52.227585 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:52 crc kubenswrapper[4708]: E0227 16:56:52.229429 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:52 crc kubenswrapper[4708]: I0227 16:56:52.229537 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:52 crc kubenswrapper[4708]: E0227 16:56:52.229747 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:52 crc kubenswrapper[4708]: E0227 16:56:52.343348 4708 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 16:56:53 crc kubenswrapper[4708]: I0227 16:56:53.227388 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:53 crc kubenswrapper[4708]: E0227 16:56:53.227559 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:53 crc kubenswrapper[4708]: I0227 16:56:53.227805 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:53 crc kubenswrapper[4708]: I0227 16:56:53.228225 4708 scope.go:117] "RemoveContainer" containerID="ce0bbc3f6718c3f80abb77d64bf1761e9f580c0379391ab3ce6ef4aa8912c4a7" Feb 27 16:56:53 crc kubenswrapper[4708]: E0227 16:56:53.228320 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:53 crc kubenswrapper[4708]: I0227 16:56:53.747582 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p6n6j_2c5353a5-c388-4046-bb29-8e73352588c2/kube-multus/1.log" Feb 27 16:56:53 crc kubenswrapper[4708]: I0227 16:56:53.747677 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p6n6j" event={"ID":"2c5353a5-c388-4046-bb29-8e73352588c2","Type":"ContainerStarted","Data":"55659c02564f28b8a0ba82f59d00103ed6e35b22ac47d4fc894c18e3333ba85f"} Feb 27 16:56:54 crc kubenswrapper[4708]: I0227 16:56:54.227275 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:54 crc kubenswrapper[4708]: I0227 16:56:54.227274 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:54 crc kubenswrapper[4708]: E0227 16:56:54.227489 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:54 crc kubenswrapper[4708]: E0227 16:56:54.227631 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:55 crc kubenswrapper[4708]: I0227 16:56:55.227642 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:55 crc kubenswrapper[4708]: I0227 16:56:55.227647 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:55 crc kubenswrapper[4708]: E0227 16:56:55.228191 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:55 crc kubenswrapper[4708]: E0227 16:56:55.228328 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:56 crc kubenswrapper[4708]: I0227 16:56:56.228044 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:56 crc kubenswrapper[4708]: I0227 16:56:56.228132 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:56 crc kubenswrapper[4708]: E0227 16:56:56.228221 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Feb 27 16:56:56 crc kubenswrapper[4708]: E0227 16:56:56.228341 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-4t52p" podUID="79b58c0b-8d12-4391-999c-9689f9488f46" Feb 27 16:56:57 crc kubenswrapper[4708]: I0227 16:56:57.227488 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:57 crc kubenswrapper[4708]: I0227 16:56:57.227490 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:57 crc kubenswrapper[4708]: E0227 16:56:57.227685 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Feb 27 16:56:57 crc kubenswrapper[4708]: E0227 16:56:57.227921 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Feb 27 16:56:58 crc kubenswrapper[4708]: I0227 16:56:58.227387 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:56:58 crc kubenswrapper[4708]: I0227 16:56:58.227400 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:56:58 crc kubenswrapper[4708]: I0227 16:56:58.231592 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Feb 27 16:56:58 crc kubenswrapper[4708]: I0227 16:56:58.232244 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Feb 27 16:56:58 crc kubenswrapper[4708]: I0227 16:56:58.232422 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Feb 27 16:56:58 crc kubenswrapper[4708]: I0227 16:56:58.232680 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 27 16:56:59 crc kubenswrapper[4708]: I0227 16:56:59.227748 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:56:59 crc kubenswrapper[4708]: I0227 16:56:59.227772 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:56:59 crc kubenswrapper[4708]: I0227 16:56:59.230897 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Feb 27 16:56:59 crc kubenswrapper[4708]: I0227 16:56:59.233373 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Feb 27 16:57:05 crc kubenswrapper[4708]: I0227 16:57:05.486025 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 16:57:05 crc kubenswrapper[4708]: I0227 16:57:05.631908 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 16:57:05 crc kubenswrapper[4708]: I0227 16:57:05.632010 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.853397 4708 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.945620 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d7z7j"] Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.946502 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.946942 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-cl8l9"] Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.947597 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.948290 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-s45vs"] Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.949063 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.951101 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm"] Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.951945 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.952272 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-q7prd"] Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.952813 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.953707 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb"] Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.960800 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.961490 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.970070 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.972455 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.972877 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.975649 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.980640 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq"] Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.982506 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.983313 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Feb 27 16:57:06 crc kubenswrapper[4708]: I0227 16:57:06.983873 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.014912 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gnspz"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.015250 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.015714 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.016058 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.016077 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.018196 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.018350 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.018466 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.019159 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.019326 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.019781 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.020146 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.020483 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.020736 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.022576 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.023179 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-km9ss"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.023751 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-55dsj"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.023773 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.024128 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.024262 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.024774 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.032917 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.033336 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.033625 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034029 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034053 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034090 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034158 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034202 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034260 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034278 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034322 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034208 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034422 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034529 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034640 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034757 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034896 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.034034 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.035078 4708 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.035217 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.035354 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.035456 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.035489 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.035604 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.035723 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.035841 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.035966 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.036082 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.036255 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.036292 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.036349 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.036636 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.036830 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.036836 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.037006 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.037152 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.037303 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.037425 4708 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.037545 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.037560 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.038016 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.038110 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.039875 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.040645 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.041024 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.041597 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.046467 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-bhsw7"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.047106 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-bhsw7" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.050105 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.051548 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.051785 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.058866 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.060505 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.071566 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.072364 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.072739 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.072828 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.072881 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hdx"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.073015 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.073205 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.073739 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.073782 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.073950 4708 util.go:30] "No sandbox for pod can be found. 
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.074008 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.074033 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.074157 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.074302 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.074316 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.074577 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.074735 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.074870 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.075067 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.075561 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.075593 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.075896 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.076705 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.077086 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.077381 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.077698 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.077912 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.080110 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.085629 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.086133 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.086183 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3de1e003-2dee-4d76-86cd-cd60680535bd-audit-dir\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.086212 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/26d12a6e-d830-4357-b372-9163d663448f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s45vs\" (UID: \"26d12a6e-d830-4357-b372-9163d663448f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.086301 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25r6m\" (UniqueName: \"kubernetes.io/projected/f47bdbdf-3cea-4337-be67-8b5f60ac8d09-kube-api-access-25r6m\") pod \"openshift-apiserver-operator-796bbdcf4f-vcbnb\" (UID: \"f47bdbdf-3cea-4337-be67-8b5f60ac8d09\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.086329 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.086358 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnphk\" (UniqueName: \"kubernetes.io/projected/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-kube-api-access-hnphk\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.086383 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-audit-dir\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.086409 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.086459 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.086501 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct7p7\" (UniqueName: \"kubernetes.io/projected/b5731e3c-f903-4516-8c08-43113e79a4ba-kube-api-access-ct7p7\") pod \"openshift-config-operator-7777fb866f-qg7fq\" (UID: \"b5731e3c-f903-4516-8c08-43113e79a4ba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.086524 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47bdbdf-3cea-4337-be67-8b5f60ac8d09-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vcbnb\" (UID: \"f47bdbdf-3cea-4337-be67-8b5f60ac8d09\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.086545 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-audit-policies\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.086578 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-config\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.087688 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.089755 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.092762 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bd8t\" (UniqueName: \"kubernetes.io/projected/855a2824-4e4a-4eae-9e71-3bc0db42f169-kube-api-access-5bd8t\") pod \"route-controller-manager-6576b87f9c-gxppm\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.112625 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.114346 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.114906 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nl25w"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.115296 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.115554 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pjlg4"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116027 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nl25w" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116040 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116120 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pjlg4" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116230 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116638 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/855a2824-4e4a-4eae-9e71-3bc0db42f169-serving-cert\") pod \"route-controller-manager-6576b87f9c-gxppm\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116675 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-serving-cert\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116696 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-oauth-config\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116723 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0951d4d1-034f-4968-b8ca-a5016d5b38d6-config\") pod \"authentication-operator-69f744f599-q7prd\" (UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116746 4708 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b5731e3c-f903-4516-8c08-43113e79a4ba-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qg7fq\" (UID: \"b5731e3c-f903-4516-8c08-43113e79a4ba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116763 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-image-import-ca\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116779 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwgt9\" (UniqueName: \"kubernetes.io/projected/b710111d-81c5-463d-b2ea-f7f3f5e27b90-kube-api-access-fwgt9\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116795 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-service-ca\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116828 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b710111d-81c5-463d-b2ea-f7f3f5e27b90-serving-cert\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116856 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116875 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/855a2824-4e4a-4eae-9e71-3bc0db42f169-client-ca\") pod \"route-controller-manager-6576b87f9c-gxppm\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116898 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0951d4d1-034f-4968-b8ca-a5016d5b38d6-service-ca-bundle\") pod \"authentication-operator-69f744f599-q7prd\" (UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116918 4708 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-etcd-client\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116942 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e664390-b33c-4aa5-972c-732c8ca37fda-config\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116954 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116959 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3e664390-b33c-4aa5-972c-732c8ca37fda-etcd-service-ca\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116977 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-trusted-ca-bundle\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.116991 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-audit-dir\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.117005 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-config\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.117021 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqrs8\" (UniqueName: \"kubernetes.io/projected/3de1e003-2dee-4d76-86cd-cd60680535bd-kube-api-access-qqrs8\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.117037 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.117054 4708 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0951d4d1-034f-4968-b8ca-a5016d5b38d6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-q7prd\" (UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.117080 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-config\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.117103 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-oauth-serving-cert\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.117128 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd8cj\" (UniqueName: \"kubernetes.io/projected/26d12a6e-d830-4357-b372-9163d663448f-kube-api-access-vd8cj\") pod \"machine-api-operator-5694c8668f-s45vs\" (UID: \"26d12a6e-d830-4357-b372-9163d663448f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.117143 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.117161 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-encryption-config\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.117178 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f47bdbdf-3cea-4337-be67-8b5f60ac8d09-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vcbnb\" (UID: \"f47bdbdf-3cea-4337-be67-8b5f60ac8d09\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118219 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-client-ca\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118258 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/855a2824-4e4a-4eae-9e71-3bc0db42f169-config\") pod \"route-controller-manager-6576b87f9c-gxppm\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118276 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118294 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-node-pullsecrets\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118321 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-etcd-serving-ca\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118336 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26d12a6e-d830-4357-b372-9163d663448f-images\") pod \"machine-api-operator-5694c8668f-s45vs\" (UID: \"26d12a6e-d830-4357-b372-9163d663448f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118352 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-serving-cert\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118372 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118385 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-audit\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118399 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqcks\" (UniqueName: \"kubernetes.io/projected/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-kube-api-access-nqcks\") 
pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118417 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118434 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118448 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3e664390-b33c-4aa5-972c-732c8ca37fda-etcd-client\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118468 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118487 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3e664390-b33c-4aa5-972c-732c8ca37fda-etcd-ca\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118502 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-etcd-client\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118516 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-encryption-config\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118529 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " 
pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118545 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-audit-policies\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118560 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzrrm\" (UniqueName: \"kubernetes.io/projected/0951d4d1-034f-4968-b8ca-a5016d5b38d6-kube-api-access-kzrrm\") pod \"authentication-operator-69f744f599-q7prd\" (UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118576 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5731e3c-f903-4516-8c08-43113e79a4ba-serving-cert\") pod \"openshift-config-operator-7777fb866f-qg7fq\" (UID: \"b5731e3c-f903-4516-8c08-43113e79a4ba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118592 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d12a6e-d830-4357-b372-9163d663448f-config\") pod \"machine-api-operator-5694c8668f-s45vs\" (UID: \"26d12a6e-d830-4357-b372-9163d663448f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118606 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-serving-cert\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118621 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e664390-b33c-4aa5-972c-732c8ca37fda-serving-cert\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118638 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bflbq\" (UniqueName: \"kubernetes.io/projected/3e664390-b33c-4aa5-972c-732c8ca37fda-kube-api-access-bflbq\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118654 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc 
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118700 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5pwt\" (UniqueName: \"kubernetes.io/projected/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-kube-api-access-n5pwt\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.118721 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0951d4d1-034f-4968-b8ca-a5016d5b38d6-serving-cert\") pod \"authentication-operator-69f744f599-q7prd\" (UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.119394 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.119534 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.119738 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.120671 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.121988 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.125360 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.127046 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-m8pjn"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.127790 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-m8pjn"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.129948 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.130703 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.131269 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.131924 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.134060 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.136365 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.136715 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.137744 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.138636 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.139277 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.139387 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.139924 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.140038 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.140441 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.141164 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.143282 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.143895 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.144513 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kfxs6"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.144987 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kfxs6"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.145354 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn4h4"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.146409 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn4h4"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.146856 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536856-lj688"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.147578 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536856-lj688"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.149094 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-89q5w"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.149330 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.150134 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-89q5w"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.150525 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-n69rk"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.151092 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-n69rk"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.152103 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.152924 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.153894 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.157663 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.159507 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lzlm4"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.160536 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4"
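Each "No sandbox for pod can be found. Need to start a new one" record precedes the kubelet asking the container runtime, over CRI, for a new pod sandbox. A hedged standalone sketch of that RunPodSandbox call follows; the CRI-O socket path is a common default rather than one confirmed by this log, and the pod metadata and UID are placeholders loosely taken from the records above.

package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed CRI-O socket location; adjust for the runtime in use.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// What follows the log line: request a fresh sandbox for the pod,
	// after which its containers are started inside that sandbox.
	resp, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "marketplace-operator-79b997595-lzlm4", // from the log
				Namespace: "openshift-marketplace",
				Uid:       "00000000-0000-0000-0000-000000000000", // placeholder, not the real UID
			},
		},
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("new sandbox:", resp.PodSandboxId)
}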
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.165545 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.169220 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.169235 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gmp65"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.169513 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.171781 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gmp65"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.172766 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.175375 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.177966 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.182366 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.182410 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-q7prd"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.182424 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.182556 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.188380 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.189463 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.191300 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-5nggf"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.194932 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bhsw7"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.195043 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.195193 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5nggf"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.196662 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.197940 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d7z7j"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.199694 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.200136 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cl8l9"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.203081 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hdx"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.204880 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gnspz"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.206719 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-55dsj"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.208093 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.209125 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-m8pjn"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.210250 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536856-lj688"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.211427 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.212900 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8zjdt"]
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.214648 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd"]
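In these records, kubelet.go:2421 "SyncLoop ADD" marks a pod newly assigned to this node, while kubelet.go:2428 "SyncLoop UPDATE" marks a later change to the same API object; both arrive through the kubelet's single event-driven sync loop. An illustrative dispatch skeleton follows; the types and channel are invented for the example, not the kubelet's real ones.

package main

import "fmt"

// One event from the pod config source, reduced to what the log shows.
type podEvent struct {
	Op  string // "ADD", "UPDATE", "REMOVE", ...
	Pod string // "namespace/name"
}

// syncLoopIteration drains events and dispatches per operation,
// mirroring the ADD/UPDATE lines above.
func syncLoopIteration(ch <-chan podEvent) {
	for ev := range ch {
		switch ev.Op {
		case "ADD":
			fmt.Printf("SyncLoop ADD source=%q pods=[%q]\n", "api", ev.Pod)
		case "UPDATE":
			fmt.Printf("SyncLoop UPDATE source=%q pods=[%q]\n", "api", ev.Pod)
		}
	}
}

func main() {
	ch := make(chan podEvent, 2)
	ch <- podEvent{"ADD", "hostpath-provisioner/csi-hostpathplugin-8zjdt"}
	ch <- podEvent{"UPDATE", "openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd"}
	close(ch)
	syncLoopIteration(ch)
}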
"SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.214743 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.215531 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-bpv6x"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.216279 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bpv6x" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.217112 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.219405 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.219950 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn7s6\" (UniqueName: \"kubernetes.io/projected/aa17085d-69af-43ec-8abe-51906d32cd5f-kube-api-access-tn7s6\") pod \"cluster-image-registry-operator-dc59b4c8b-hwfzq\" (UID: \"aa17085d-69af-43ec-8abe-51906d32cd5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.220057 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-audit-policies\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.220197 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-etcd-client\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.220952 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-encryption-config\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.220991 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221020 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pv7js\" (UniqueName: \"kubernetes.io/projected/e29ddaa7-6347-4254-bec7-d84e84cd57bd-kube-api-access-pv7js\") pod \"dns-operator-744455d44c-nl25w\" (UID: 
\"e29ddaa7-6347-4254-bec7-d84e84cd57bd\") " pod="openshift-dns-operator/dns-operator-744455d44c-nl25w" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.220866 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221043 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e664390-b33c-4aa5-972c-732c8ca37fda-serving-cert\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221114 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzrrm\" (UniqueName: \"kubernetes.io/projected/0951d4d1-034f-4968-b8ca-a5016d5b38d6-kube-api-access-kzrrm\") pod \"authentication-operator-69f744f599-q7prd\" (UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221139 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5731e3c-f903-4516-8c08-43113e79a4ba-serving-cert\") pod \"openshift-config-operator-7777fb866f-qg7fq\" (UID: \"b5731e3c-f903-4516-8c08-43113e79a4ba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221158 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d12a6e-d830-4357-b372-9163d663448f-config\") pod \"machine-api-operator-5694c8668f-s45vs\" (UID: \"26d12a6e-d830-4357-b372-9163d663448f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221194 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-serving-cert\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221213 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bflbq\" (UniqueName: \"kubernetes.io/projected/3e664390-b33c-4aa5-972c-732c8ca37fda-kube-api-access-bflbq\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221235 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221265 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0951d4d1-034f-4968-b8ca-a5016d5b38d6-serving-cert\") pod \"authentication-operator-69f744f599-q7prd\" (UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221283 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221302 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5pwt\" (UniqueName: \"kubernetes.io/projected/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-kube-api-access-n5pwt\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221322 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3de1e003-2dee-4d76-86cd-cd60680535bd-audit-dir\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221345 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221372 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twlt9\" (UniqueName: \"kubernetes.io/projected/3bbf873e-72f0-4743-a2bc-4866dd8b8f86-kube-api-access-twlt9\") pod \"downloads-7954f5f757-bhsw7\" (UID: \"3bbf873e-72f0-4743-a2bc-4866dd8b8f86\") " pod="openshift-console/downloads-7954f5f757-bhsw7" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221391 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af2f048-e8b4-449c-8c5d-e4c81f2437d4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nnfn\" (UID: \"5af2f048-e8b4-449c-8c5d-e4c81f2437d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221412 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnphk\" (UniqueName: \"kubernetes.io/projected/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-kube-api-access-hnphk\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221438 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/26d12a6e-d830-4357-b372-9163d663448f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s45vs\" (UID: \"26d12a6e-d830-4357-b372-9163d663448f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" Feb 27 16:57:07 crc 
kubenswrapper[4708]: I0227 16:57:07.221460 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25r6m\" (UniqueName: \"kubernetes.io/projected/f47bdbdf-3cea-4337-be67-8b5f60ac8d09-kube-api-access-25r6m\") pod \"openshift-apiserver-operator-796bbdcf4f-vcbnb\" (UID: \"f47bdbdf-3cea-4337-be67-8b5f60ac8d09\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221483 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221502 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7d946e-7a0a-4d26-b3bb-ba0eb988994b-metrics-tls\") pod \"ingress-operator-5b745b69d9-c2nfw\" (UID: \"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221526 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-audit-dir\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221558 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221579 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221599 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct7p7\" (UniqueName: \"kubernetes.io/projected/b5731e3c-f903-4516-8c08-43113e79a4ba-kube-api-access-ct7p7\") pod \"openshift-config-operator-7777fb866f-qg7fq\" (UID: \"b5731e3c-f903-4516-8c08-43113e79a4ba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221643 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-config\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221662 4708 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5bd8t\" (UniqueName: \"kubernetes.io/projected/855a2824-4e4a-4eae-9e71-3bc0db42f169-kube-api-access-5bd8t\") pod \"route-controller-manager-6576b87f9c-gxppm\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.220610 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-audit-policies\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221685 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47bdbdf-3cea-4337-be67-8b5f60ac8d09-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vcbnb\" (UID: \"f47bdbdf-3cea-4337-be67-8b5f60ac8d09\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221706 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-audit-policies\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221726 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91805ba9-a3ff-4470-9302-cc2de796c19a-config\") pod \"console-operator-58897d9998-m8pjn\" (UID: \"91805ba9-a3ff-4470-9302-cc2de796c19a\") " pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221746 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1284b6e4-1c2c-443e-b18d-163396ede328-config-volume\") pod \"collect-profiles-29536845-ws52p\" (UID: \"1284b6e4-1c2c-443e-b18d-163396ede328\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221768 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0951d4d1-034f-4968-b8ca-a5016d5b38d6-config\") pod \"authentication-operator-69f744f599-q7prd\" (UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221787 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/855a2824-4e4a-4eae-9e71-3bc0db42f169-serving-cert\") pod \"route-controller-manager-6576b87f9c-gxppm\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221806 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-serving-cert\") pod 
\"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221823 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-oauth-config\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221856 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b5731e3c-f903-4516-8c08-43113e79a4ba-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qg7fq\" (UID: \"b5731e3c-f903-4516-8c08-43113e79a4ba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221877 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-image-import-ca\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221899 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1d71bdc-a8da-44da-a448-8ee75981e31c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tthbr\" (UID: \"e1d71bdc-a8da-44da-a448-8ee75981e31c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221921 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwgt9\" (UniqueName: \"kubernetes.io/projected/b710111d-81c5-463d-b2ea-f7f3f5e27b90-kube-api-access-fwgt9\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221940 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-service-ca\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221973 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b710111d-81c5-463d-b2ea-f7f3f5e27b90-serving-cert\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.221993 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc 
kubenswrapper[4708]: I0227 16:57:07.222013 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5gtp\" (UniqueName: \"kubernetes.io/projected/bf7d946e-7a0a-4d26-b3bb-ba0eb988994b-kube-api-access-d5gtp\") pod \"ingress-operator-5b745b69d9-c2nfw\" (UID: \"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222033 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t45r\" (UniqueName: \"kubernetes.io/projected/d5929ffa-b478-440c-8efe-bad4b8f21e4e-kube-api-access-7t45r\") pod \"cluster-samples-operator-665b6dd947-77hdx\" (UID: \"d5929ffa-b478-440c-8efe-bad4b8f21e4e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hdx" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222056 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/855a2824-4e4a-4eae-9e71-3bc0db42f169-client-ca\") pod \"route-controller-manager-6576b87f9c-gxppm\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222076 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91805ba9-a3ff-4470-9302-cc2de796c19a-serving-cert\") pod \"console-operator-58897d9998-m8pjn\" (UID: \"91805ba9-a3ff-4470-9302-cc2de796c19a\") " pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222093 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91805ba9-a3ff-4470-9302-cc2de796c19a-trusted-ca\") pod \"console-operator-58897d9998-m8pjn\" (UID: \"91805ba9-a3ff-4470-9302-cc2de796c19a\") " pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222111 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e664390-b33c-4aa5-972c-732c8ca37fda-config\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222137 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0951d4d1-034f-4968-b8ca-a5016d5b38d6-service-ca-bundle\") pod \"authentication-operator-69f744f599-q7prd\" (UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222156 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-etcd-client\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222172 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3e664390-b33c-4aa5-972c-732c8ca37fda-etcd-service-ca\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222207 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-trusted-ca-bundle\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222224 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-audit-dir\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222242 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-config\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222262 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qqrs8\" (UniqueName: \"kubernetes.io/projected/3de1e003-2dee-4d76-86cd-cd60680535bd-kube-api-access-qqrs8\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222279 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222297 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nrsl\" (UniqueName: \"kubernetes.io/projected/f9d4819e-1f9b-43dc-9ef6-96fdb3f9c624-kube-api-access-4nrsl\") pod \"migrator-59844c95c7-pjlg4\" (UID: \"f9d4819e-1f9b-43dc-9ef6-96fdb3f9c624\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pjlg4" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222317 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0951d4d1-034f-4968-b8ca-a5016d5b38d6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-q7prd\" (UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222337 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t4m4\" (UniqueName: \"kubernetes.io/projected/f6f38275-7eca-41e7-81a7-0bc5233ba757-kube-api-access-2t4m4\") pod \"openshift-controller-manager-operator-756b6f6bc6-pwk5v\" (UID: \"f6f38275-7eca-41e7-81a7-0bc5233ba757\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222357 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5af2f048-e8b4-449c-8c5d-e4c81f2437d4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nnfn\" (UID: \"5af2f048-e8b4-449c-8c5d-e4c81f2437d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222374 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tq2j\" (UniqueName: \"kubernetes.io/projected/1284b6e4-1c2c-443e-b18d-163396ede328-kube-api-access-5tq2j\") pod \"collect-profiles-29536845-ws52p\" (UID: \"1284b6e4-1c2c-443e-b18d-163396ede328\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222396 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd8cj\" (UniqueName: \"kubernetes.io/projected/26d12a6e-d830-4357-b372-9163d663448f-kube-api-access-vd8cj\") pod \"machine-api-operator-5694c8668f-s45vs\" (UID: \"26d12a6e-d830-4357-b372-9163d663448f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222413 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-config\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222433 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-oauth-serving-cert\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222451 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghdvl\" (UniqueName: \"kubernetes.io/projected/91805ba9-a3ff-4470-9302-cc2de796c19a-kube-api-access-ghdvl\") pod \"console-operator-58897d9998-m8pjn\" (UID: \"91805ba9-a3ff-4470-9302-cc2de796c19a\") " pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222460 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222578 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lzlm4"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222702 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222867 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3de1e003-2dee-4d76-86cd-cd60680535bd-audit-dir\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.222961 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-audit-dir\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.223426 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.223511 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0951d4d1-034f-4968-b8ca-a5016d5b38d6-config\") pod \"authentication-operator-69f744f599-q7prd\" (UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.223525 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3e664390-b33c-4aa5-972c-732c8ca37fda-config\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.224167 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-audit-dir\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.224604 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-config\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.224809 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/855a2824-4e4a-4eae-9e71-3bc0db42f169-client-ca\") pod \"route-controller-manager-6576b87f9c-gxppm\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225047 4708 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225124 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0951d4d1-034f-4968-b8ca-a5016d5b38d6-service-ca-bundle\") pod \"authentication-operator-69f744f599-q7prd\" (UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225149 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f38275-7eca-41e7-81a7-0bc5233ba757-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pwk5v\" (UID: \"f6f38275-7eca-41e7-81a7-0bc5233ba757\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225212 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1284b6e4-1c2c-443e-b18d-163396ede328-secret-volume\") pod \"collect-profiles-29536845-ws52p\" (UID: \"1284b6e4-1c2c-443e-b18d-163396ede328\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225241 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f47bdbdf-3cea-4337-be67-8b5f60ac8d09-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vcbnb\" (UID: \"f47bdbdf-3cea-4337-be67-8b5f60ac8d09\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225265 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-encryption-config\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225282 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-trusted-ca-bundle\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225293 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-client-ca\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225134 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/3e664390-b33c-4aa5-972c-732c8ca37fda-etcd-service-ca\") pod 
\"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225416 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/855a2824-4e4a-4eae-9e71-3bc0db42f169-config\") pod \"route-controller-manager-6576b87f9c-gxppm\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225440 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225462 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-node-pullsecrets\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225488 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx8zr\" (UniqueName: \"kubernetes.io/projected/d889f5d6-d274-4604-bb80-1529caf804d0-kube-api-access-mx8zr\") pod \"machine-approver-56656f9798-95bz9\" (UID: \"d889f5d6-d274-4604-bb80-1529caf804d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225525 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-etcd-serving-ca\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225548 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf7d946e-7a0a-4d26-b3bb-ba0eb988994b-trusted-ca\") pod \"ingress-operator-5b745b69d9-c2nfw\" (UID: \"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225572 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5929ffa-b478-440c-8efe-bad4b8f21e4e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-77hdx\" (UID: \"d5929ffa-b478-440c-8efe-bad4b8f21e4e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hdx" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225619 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa17085d-69af-43ec-8abe-51906d32cd5f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hwfzq\" (UID: \"aa17085d-69af-43ec-8abe-51906d32cd5f\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225642 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d889f5d6-d274-4604-bb80-1529caf804d0-auth-proxy-config\") pod \"machine-approver-56656f9798-95bz9\" (UID: \"d889f5d6-d274-4604-bb80-1529caf804d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225683 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d889f5d6-d274-4604-bb80-1529caf804d0-config\") pod \"machine-approver-56656f9798-95bz9\" (UID: \"d889f5d6-d274-4604-bb80-1529caf804d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225712 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26d12a6e-d830-4357-b372-9163d663448f-images\") pod \"machine-api-operator-5694c8668f-s45vs\" (UID: \"26d12a6e-d830-4357-b372-9163d663448f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225742 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-serving-cert\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225955 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b5731e3c-f903-4516-8c08-43113e79a4ba-available-featuregates\") pod \"openshift-config-operator-7777fb866f-qg7fq\" (UID: \"b5731e3c-f903-4516-8c08-43113e79a4ba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.226020 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-oauth-serving-cert\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.226108 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-node-pullsecrets\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.226433 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-encryption-config\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.226515 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-serving-cert\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.226520 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-client-ca\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.227384 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-trusted-ca-bundle\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.227389 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/26d12a6e-d830-4357-b372-9163d663448f-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-s45vs\" (UID: \"26d12a6e-d830-4357-b372-9163d663448f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228191 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3e664390-b33c-4aa5-972c-732c8ca37fda-serving-cert\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228238 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-image-import-ca\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228247 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-config\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.225745 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-config\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228331 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228382 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228438 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-audit\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228469 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqcks\" (UniqueName: \"kubernetes.io/projected/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-kube-api-access-nqcks\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228505 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228538 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d71bdc-a8da-44da-a448-8ee75981e31c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tthbr\" (UID: \"e1d71bdc-a8da-44da-a448-8ee75981e31c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228570 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228632 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e29ddaa7-6347-4254-bec7-d84e84cd57bd-metrics-tls\") pod \"dns-operator-744455d44c-nl25w\" (UID: \"e29ddaa7-6347-4254-bec7-d84e84cd57bd\") " pod="openshift-dns-operator/dns-operator-744455d44c-nl25w" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228748 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kk68m\" (UniqueName: \"kubernetes.io/projected/e1d71bdc-a8da-44da-a448-8ee75981e31c-kube-api-access-kk68m\") pod \"kube-storage-version-migrator-operator-b67b599dd-tthbr\" (UID: \"e1d71bdc-a8da-44da-a448-8ee75981e31c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228837 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/3e664390-b33c-4aa5-972c-732c8ca37fda-etcd-client\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228872 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-audit-policies\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228911 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf7d946e-7a0a-4d26-b3bb-ba0eb988994b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c2nfw\" (UID: \"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228945 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5af2f048-e8b4-449c-8c5d-e4c81f2437d4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nnfn\" (UID: \"5af2f048-e8b4-449c-8c5d-e4c81f2437d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.228975 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d889f5d6-d274-4604-bb80-1529caf804d0-machine-approver-tls\") pod \"machine-approver-56656f9798-95bz9\" (UID: \"d889f5d6-d274-4604-bb80-1529caf804d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.229005 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa17085d-69af-43ec-8abe-51906d32cd5f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hwfzq\" (UID: \"aa17085d-69af-43ec-8abe-51906d32cd5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.229794 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47bdbdf-3cea-4337-be67-8b5f60ac8d09-config\") pod \"openshift-apiserver-operator-796bbdcf4f-vcbnb\" (UID: \"f47bdbdf-3cea-4337-be67-8b5f60ac8d09\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.230024 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/26d12a6e-d830-4357-b372-9163d663448f-config\") pod \"machine-api-operator-5694c8668f-s45vs\" (UID: \"26d12a6e-d830-4357-b372-9163d663448f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.230174 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0951d4d1-034f-4968-b8ca-a5016d5b38d6-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-q7prd\" 
(UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.230277 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.230484 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-etcd-serving-ca\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.230491 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.230624 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-service-ca\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.230681 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/26d12a6e-d830-4357-b372-9163d663448f-images\") pod \"machine-api-operator-5694c8668f-s45vs\" (UID: \"26d12a6e-d830-4357-b372-9163d663448f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.231084 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.231167 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3e664390-b33c-4aa5-972c-732c8ca37fda-etcd-ca\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.231232 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-audit\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.231297 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/aa17085d-69af-43ec-8abe-51906d32cd5f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hwfzq\" (UID: \"aa17085d-69af-43ec-8abe-51906d32cd5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.231387 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f38275-7eca-41e7-81a7-0bc5233ba757-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pwk5v\" (UID: \"f6f38275-7eca-41e7-81a7-0bc5233ba757\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.231797 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/3e664390-b33c-4aa5-972c-732c8ca37fda-etcd-ca\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.232068 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-encryption-config\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.232144 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.232461 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.232793 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.232802 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/3e664390-b33c-4aa5-972c-732c8ca37fda-etcd-client\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.233250 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-serving-cert\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.233540 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/855a2824-4e4a-4eae-9e71-3bc0db42f169-config\") pod \"route-controller-manager-6576b87f9c-gxppm\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 
16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.234155 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.234449 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/855a2824-4e4a-4eae-9e71-3bc0db42f169-serving-cert\") pod \"route-controller-manager-6576b87f9c-gxppm\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.234522 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f47bdbdf-3cea-4337-be67-8b5f60ac8d09-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-vcbnb\" (UID: \"f47bdbdf-3cea-4337-be67-8b5f60ac8d09\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.234631 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.234826 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pjlg4"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.234878 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.235343 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-serving-cert\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.235350 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.236121 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.236167 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.236781 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-etcd-client\") pod \"apiserver-76f77b778f-d7z7j\" (UID: \"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.237913 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.238412 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn4h4"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.239435 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.239448 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5731e3c-f903-4516-8c08-43113e79a4ba-serving-cert\") pod \"openshift-config-operator-7777fb866f-qg7fq\" (UID: \"b5731e3c-f903-4516-8c08-43113e79a4ba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.239516 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-oauth-config\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.239560 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b710111d-81c5-463d-b2ea-f7f3f5e27b90-serving-cert\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.239611 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.240308 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0951d4d1-034f-4968-b8ca-a5016d5b38d6-serving-cert\") pod \"authentication-operator-69f744f599-q7prd\" (UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.241968 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-etcd-client\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.242281 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-s45vs"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.243477 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.244949 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-89q5w"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.249020 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.249750 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.252055 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kfxs6"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.254272 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nl25w"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.255865 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-km9ss"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.257534 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5nggf"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.258627 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.259836 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8zjdt"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.260950 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.262122 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gmp65"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.263157 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.264382 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qvsn8"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.265315 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-qvsn8" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.265453 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qvsn8"] Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.268750 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.288210 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.307910 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.328585 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.332287 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcgr4\" (UniqueName: \"kubernetes.io/projected/f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8-kube-api-access-vcgr4\") pod \"machine-config-controller-84d6567774-bkpsd\" (UID: \"f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.332413 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl7hw\" (UniqueName: \"kubernetes.io/projected/ee155f68-76dd-411e-8617-05e452690cdf-kube-api-access-wl7hw\") pod \"olm-operator-6b444d44fb-qpj27\" (UID: \"ee155f68-76dd-411e-8617-05e452690cdf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.332476 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvdbr\" (UniqueName: \"kubernetes.io/projected/a334b9f5-9e47-48a5-97e2-481df00ce760-kube-api-access-fvdbr\") pod \"package-server-manager-789f6589d5-5pcgl\" (UID: \"a334b9f5-9e47-48a5-97e2-481df00ce760\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.332506 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee155f68-76dd-411e-8617-05e452690cdf-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qpj27\" (UID: \"ee155f68-76dd-411e-8617-05e452690cdf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.332537 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twlt9\" (UniqueName: \"kubernetes.io/projected/3bbf873e-72f0-4743-a2bc-4866dd8b8f86-kube-api-access-twlt9\") pod \"downloads-7954f5f757-bhsw7\" (UID: \"3bbf873e-72f0-4743-a2bc-4866dd8b8f86\") " pod="openshift-console/downloads-7954f5f757-bhsw7" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.332583 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7d946e-7a0a-4d26-b3bb-ba0eb988994b-metrics-tls\") pod \"ingress-operator-5b745b69d9-c2nfw\" (UID: 
\"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.332609 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/14aac296-ac45-4e74-91c1-069313c31337-srv-cert\") pod \"catalog-operator-68c6474976-nhz26\" (UID: \"14aac296-ac45-4e74-91c1-069313c31337\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.332652 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r5zs6\" (UID: \"0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.332677 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5935db5e-10d8-40a9-bc7c-102a18d42401-images\") pod \"machine-config-operator-74547568cd-4vsrd\" (UID: \"5935db5e-10d8-40a9-bc7c-102a18d42401\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.332699 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8-config\") pod \"kube-apiserver-operator-766d6c64bb-r5zs6\" (UID: \"0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.332857 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91805ba9-a3ff-4470-9302-cc2de796c19a-config\") pod \"console-operator-58897d9998-m8pjn\" (UID: \"91805ba9-a3ff-4470-9302-cc2de796c19a\") " pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.332891 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1284b6e4-1c2c-443e-b18d-163396ede328-config-volume\") pod \"collect-profiles-29536845-ws52p\" (UID: \"1284b6e4-1c2c-443e-b18d-163396ede328\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.332923 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1d71bdc-a8da-44da-a448-8ee75981e31c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tthbr\" (UID: \"e1d71bdc-a8da-44da-a448-8ee75981e31c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.332951 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80ea1e3c-71c9-4fa8-bd21-15e217d09023-apiservice-cert\") pod \"packageserver-d55dfcdfc-rmvrl\" (UID: \"80ea1e3c-71c9-4fa8-bd21-15e217d09023\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333039 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdslk\" (UniqueName: \"kubernetes.io/projected/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-kube-api-access-cdslk\") pod \"router-default-5444994796-n69rk\" (UID: \"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333187 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91805ba9-a3ff-4470-9302-cc2de796c19a-serving-cert\") pod \"console-operator-58897d9998-m8pjn\" (UID: \"91805ba9-a3ff-4470-9302-cc2de796c19a\") " pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333382 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91805ba9-a3ff-4470-9302-cc2de796c19a-trusted-ca\") pod \"console-operator-58897d9998-m8pjn\" (UID: \"91805ba9-a3ff-4470-9302-cc2de796c19a\") " pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333457 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a334b9f5-9e47-48a5-97e2-481df00ce760-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5pcgl\" (UID: \"a334b9f5-9e47-48a5-97e2-481df00ce760\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333559 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrcz2\" (UniqueName: \"kubernetes.io/projected/14aac296-ac45-4e74-91c1-069313c31337-kube-api-access-nrcz2\") pod \"catalog-operator-68c6474976-nhz26\" (UID: \"14aac296-ac45-4e74-91c1-069313c31337\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333587 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/350b2f42-3a86-4113-9dd2-bfe644158993-signing-key\") pod \"service-ca-9c57cc56f-kfxs6\" (UID: \"350b2f42-3a86-4113-9dd2-bfe644158993\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfxs6" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333613 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/399886aa-6188-4575-905d-ae9888853692-cert\") pod \"ingress-canary-5nggf\" (UID: \"399886aa-6188-4575-905d-ae9888853692\") " pod="openshift-ingress-canary/ingress-canary-5nggf" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333630 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2hlql\" (UniqueName: \"kubernetes.io/projected/350b2f42-3a86-4113-9dd2-bfe644158993-kube-api-access-2hlql\") pod \"service-ca-9c57cc56f-kfxs6\" (UID: \"350b2f42-3a86-4113-9dd2-bfe644158993\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfxs6" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 
16:57:07.333653 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bkpsd\" (UID: \"f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333677 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t4m4\" (UniqueName: \"kubernetes.io/projected/f6f38275-7eca-41e7-81a7-0bc5233ba757-kube-api-access-2t4m4\") pod \"openshift-controller-manager-operator-756b6f6bc6-pwk5v\" (UID: \"f6f38275-7eca-41e7-81a7-0bc5233ba757\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333696 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5af2f048-e8b4-449c-8c5d-e4c81f2437d4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nnfn\" (UID: \"5af2f048-e8b4-449c-8c5d-e4c81f2437d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333741 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tq2j\" (UniqueName: \"kubernetes.io/projected/1284b6e4-1c2c-443e-b18d-163396ede328-kube-api-access-5tq2j\") pod \"collect-profiles-29536845-ws52p\" (UID: \"1284b6e4-1c2c-443e-b18d-163396ede328\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333781 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghdvl\" (UniqueName: \"kubernetes.io/projected/91805ba9-a3ff-4470-9302-cc2de796c19a-kube-api-access-ghdvl\") pod \"console-operator-58897d9998-m8pjn\" (UID: \"91805ba9-a3ff-4470-9302-cc2de796c19a\") " pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333809 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4wwf\" (UniqueName: \"kubernetes.io/projected/80ea1e3c-71c9-4fa8-bd21-15e217d09023-kube-api-access-s4wwf\") pod \"packageserver-d55dfcdfc-rmvrl\" (UID: \"80ea1e3c-71c9-4fa8-bd21-15e217d09023\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333829 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1284b6e4-1c2c-443e-b18d-163396ede328-secret-volume\") pod \"collect-profiles-29536845-ws52p\" (UID: \"1284b6e4-1c2c-443e-b18d-163396ede328\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333865 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee155f68-76dd-411e-8617-05e452690cdf-srv-cert\") pod \"olm-operator-6b444d44fb-qpj27\" (UID: \"ee155f68-76dd-411e-8617-05e452690cdf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" Feb 27 16:57:07 crc 
kubenswrapper[4708]: I0227 16:57:07.333919 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f38275-7eca-41e7-81a7-0bc5233ba757-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pwk5v\" (UID: \"f6f38275-7eca-41e7-81a7-0bc5233ba757\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333951 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30774ea6-14da-4a74-9090-797c655dd601-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lmkxt\" (UID: \"30774ea6-14da-4a74-9090-797c655dd601\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.333985 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lsvv\" (UniqueName: \"kubernetes.io/projected/b6045acf-39a2-42d5-a92f-7ceb260e6e43-kube-api-access-4lsvv\") pod \"machine-config-server-bpv6x\" (UID: \"b6045acf-39a2-42d5-a92f-7ceb260e6e43\") " pod="openshift-machine-config-operator/machine-config-server-bpv6x" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.334022 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5929ffa-b478-440c-8efe-bad4b8f21e4e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-77hdx\" (UID: \"d5929ffa-b478-440c-8efe-bad4b8f21e4e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hdx" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.334042 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa17085d-69af-43ec-8abe-51906d32cd5f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hwfzq\" (UID: \"aa17085d-69af-43ec-8abe-51906d32cd5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.334064 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlrjp\" (UniqueName: \"kubernetes.io/projected/84260b20-4df9-4dea-9524-bd9c18ef7074-kube-api-access-zlrjp\") pod \"marketplace-operator-79b997595-lzlm4\" (UID: \"84260b20-4df9-4dea-9524-bd9c18ef7074\") " pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.334084 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80ea1e3c-71c9-4fa8-bd21-15e217d09023-webhook-cert\") pod \"packageserver-d55dfcdfc-rmvrl\" (UID: \"80ea1e3c-71c9-4fa8-bd21-15e217d09023\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.334169 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/80ea1e3c-71c9-4fa8-bd21-15e217d09023-tmpfs\") pod \"packageserver-d55dfcdfc-rmvrl\" (UID: \"80ea1e3c-71c9-4fa8-bd21-15e217d09023\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:07 crc 
kubenswrapper[4708]: I0227 16:57:07.334203 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d71bdc-a8da-44da-a448-8ee75981e31c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tthbr\" (UID: \"e1d71bdc-a8da-44da-a448-8ee75981e31c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.334226 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fl86f\" (UniqueName: \"kubernetes.io/projected/5935db5e-10d8-40a9-bc7c-102a18d42401-kube-api-access-fl86f\") pod \"machine-config-operator-74547568cd-4vsrd\" (UID: \"5935db5e-10d8-40a9-bc7c-102a18d42401\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.334245 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30774ea6-14da-4a74-9090-797c655dd601-config\") pod \"kube-controller-manager-operator-78b949d7b-lmkxt\" (UID: \"30774ea6-14da-4a74-9090-797c655dd601\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.334283 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e29ddaa7-6347-4254-bec7-d84e84cd57bd-metrics-tls\") pod \"dns-operator-744455d44c-nl25w\" (UID: \"e29ddaa7-6347-4254-bec7-d84e84cd57bd\") " pod="openshift-dns-operator/dns-operator-744455d44c-nl25w" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.334381 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kk68m\" (UniqueName: \"kubernetes.io/projected/e1d71bdc-a8da-44da-a448-8ee75981e31c-kube-api-access-kk68m\") pod \"kube-storage-version-migrator-operator-b67b599dd-tthbr\" (UID: \"e1d71bdc-a8da-44da-a448-8ee75981e31c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.334535 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psnhx\" (UniqueName: \"kubernetes.io/projected/1af108ab-bba9-4de6-bcbc-601fcba9e197-kube-api-access-psnhx\") pod \"service-ca-operator-777779d784-qtj4l\" (UID: \"1af108ab-bba9-4de6-bcbc-601fcba9e197\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.334633 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f38275-7eca-41e7-81a7-0bc5233ba757-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pwk5v\" (UID: \"f6f38275-7eca-41e7-81a7-0bc5233ba757\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.334777 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-metrics-certs\") pod \"router-default-5444994796-n69rk\" (UID: \"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " 
pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.334954 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6f38275-7eca-41e7-81a7-0bc5233ba757-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-pwk5v\" (UID: \"f6f38275-7eca-41e7-81a7-0bc5233ba757\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.334978 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af2f048-e8b4-449c-8c5d-e4c81f2437d4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nnfn\" (UID: \"5af2f048-e8b4-449c-8c5d-e4c81f2437d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.335146 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r5zs6\" (UID: \"0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.335289 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lzlm4\" (UID: \"84260b20-4df9-4dea-9524-bd9c18ef7074\") " pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.335372 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c050b374-23f2-4a98-af19-fee47a82a879-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jn4h4\" (UID: \"c050b374-23f2-4a98-af19-fee47a82a879\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn4h4" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.335507 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8-proxy-tls\") pod \"machine-config-controller-84d6567774-bkpsd\" (UID: \"f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.335630 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5gtp\" (UniqueName: \"kubernetes.io/projected/bf7d946e-7a0a-4d26-b3bb-ba0eb988994b-kube-api-access-d5gtp\") pod \"ingress-operator-5b745b69d9-c2nfw\" (UID: \"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.335717 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t45r\" (UniqueName: \"kubernetes.io/projected/d5929ffa-b478-440c-8efe-bad4b8f21e4e-kube-api-access-7t45r\") pod \"cluster-samples-operator-665b6dd947-77hdx\" 
(UID: \"d5929ffa-b478-440c-8efe-bad4b8f21e4e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hdx" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.335820 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/350b2f42-3a86-4113-9dd2-bfe644158993-signing-cabundle\") pod \"service-ca-9c57cc56f-kfxs6\" (UID: \"350b2f42-3a86-4113-9dd2-bfe644158993\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfxs6" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.335959 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfzcl\" (UniqueName: \"kubernetes.io/projected/399886aa-6188-4575-905d-ae9888853692-kube-api-access-rfzcl\") pod \"ingress-canary-5nggf\" (UID: \"399886aa-6188-4575-905d-ae9888853692\") " pod="openshift-ingress-canary/ingress-canary-5nggf" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336060 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nrsl\" (UniqueName: \"kubernetes.io/projected/f9d4819e-1f9b-43dc-9ef6-96fdb3f9c624-kube-api-access-4nrsl\") pod \"migrator-59844c95c7-pjlg4\" (UID: \"f9d4819e-1f9b-43dc-9ef6-96fdb3f9c624\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pjlg4" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336183 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-service-ca-bundle\") pod \"router-default-5444994796-n69rk\" (UID: \"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336286 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b6045acf-39a2-42d5-a92f-7ceb260e6e43-certs\") pod \"machine-config-server-bpv6x\" (UID: \"b6045acf-39a2-42d5-a92f-7ceb260e6e43\") " pod="openshift-machine-config-operator/machine-config-server-bpv6x" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336369 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30774ea6-14da-4a74-9090-797c655dd601-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lmkxt\" (UID: \"30774ea6-14da-4a74-9090-797c655dd601\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336426 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-stats-auth\") pod \"router-default-5444994796-n69rk\" (UID: \"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336492 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-default-certificate\") pod \"router-default-5444994796-n69rk\" (UID: \"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " 
pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336531 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf7d946e-7a0a-4d26-b3bb-ba0eb988994b-trusted-ca\") pod \"ingress-operator-5b745b69d9-c2nfw\" (UID: \"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336570 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d889f5d6-d274-4604-bb80-1529caf804d0-auth-proxy-config\") pod \"machine-approver-56656f9798-95bz9\" (UID: \"d889f5d6-d274-4604-bb80-1529caf804d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336666 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d889f5d6-d274-4604-bb80-1529caf804d0-config\") pod \"machine-approver-56656f9798-95bz9\" (UID: \"d889f5d6-d274-4604-bb80-1529caf804d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336703 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx8zr\" (UniqueName: \"kubernetes.io/projected/d889f5d6-d274-4604-bb80-1529caf804d0-kube-api-access-mx8zr\") pod \"machine-approver-56656f9798-95bz9\" (UID: \"d889f5d6-d274-4604-bb80-1529caf804d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336734 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5935db5e-10d8-40a9-bc7c-102a18d42401-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4vsrd\" (UID: \"5935db5e-10d8-40a9-bc7c-102a18d42401\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336762 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1af108ab-bba9-4de6-bcbc-601fcba9e197-serving-cert\") pod \"service-ca-operator-777779d784-qtj4l\" (UID: \"1af108ab-bba9-4de6-bcbc-601fcba9e197\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336789 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/14aac296-ac45-4e74-91c1-069313c31337-profile-collector-cert\") pod \"catalog-operator-68c6474976-nhz26\" (UID: \"14aac296-ac45-4e74-91c1-069313c31337\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336817 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6hs4\" (UniqueName: \"kubernetes.io/projected/c050b374-23f2-4a98-af19-fee47a82a879-kube-api-access-r6hs4\") pod \"control-plane-machine-set-operator-78cbb6b69f-jn4h4\" (UID: \"c050b374-23f2-4a98-af19-fee47a82a879\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn4h4" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.336924 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjwdq\" (UniqueName: \"kubernetes.io/projected/c8c016d5-5c1f-4680-a678-8568d218617e-kube-api-access-mjwdq\") pod \"auto-csr-approver-29536856-lj688\" (UID: \"c8c016d5-5c1f-4680-a678-8568d218617e\") " pod="openshift-infra/auto-csr-approver-29536856-lj688" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.337016 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf7d946e-7a0a-4d26-b3bb-ba0eb988994b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c2nfw\" (UID: \"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.337060 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5af2f048-e8b4-449c-8c5d-e4c81f2437d4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nnfn\" (UID: \"5af2f048-e8b4-449c-8c5d-e4c81f2437d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.337095 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d889f5d6-d274-4604-bb80-1529caf804d0-machine-approver-tls\") pod \"machine-approver-56656f9798-95bz9\" (UID: \"d889f5d6-d274-4604-bb80-1529caf804d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.337153 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa17085d-69af-43ec-8abe-51906d32cd5f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hwfzq\" (UID: \"aa17085d-69af-43ec-8abe-51906d32cd5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.337208 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d889f5d6-d274-4604-bb80-1529caf804d0-auth-proxy-config\") pod \"machine-approver-56656f9798-95bz9\" (UID: \"d889f5d6-d274-4604-bb80-1529caf804d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.337249 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1af108ab-bba9-4de6-bcbc-601fcba9e197-config\") pod \"service-ca-operator-777779d784-qtj4l\" (UID: \"1af108ab-bba9-4de6-bcbc-601fcba9e197\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.337293 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/aa17085d-69af-43ec-8abe-51906d32cd5f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hwfzq\" (UID: \"aa17085d-69af-43ec-8abe-51906d32cd5f\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.337331 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b6045acf-39a2-42d5-a92f-7ceb260e6e43-node-bootstrap-token\") pod \"machine-config-server-bpv6x\" (UID: \"b6045acf-39a2-42d5-a92f-7ceb260e6e43\") " pod="openshift-machine-config-operator/machine-config-server-bpv6x" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.337401 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pv7js\" (UniqueName: \"kubernetes.io/projected/e29ddaa7-6347-4254-bec7-d84e84cd57bd-kube-api-access-pv7js\") pod \"dns-operator-744455d44c-nl25w\" (UID: \"e29ddaa7-6347-4254-bec7-d84e84cd57bd\") " pod="openshift-dns-operator/dns-operator-744455d44c-nl25w" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.337481 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn7s6\" (UniqueName: \"kubernetes.io/projected/aa17085d-69af-43ec-8abe-51906d32cd5f-kube-api-access-tn7s6\") pod \"cluster-image-registry-operator-dc59b4c8b-hwfzq\" (UID: \"aa17085d-69af-43ec-8abe-51906d32cd5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.337669 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5935db5e-10d8-40a9-bc7c-102a18d42401-proxy-tls\") pod \"machine-config-operator-74547568cd-4vsrd\" (UID: \"5935db5e-10d8-40a9-bc7c-102a18d42401\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.337721 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf7d946e-7a0a-4d26-b3bb-ba0eb988994b-trusted-ca\") pod \"ingress-operator-5b745b69d9-c2nfw\" (UID: \"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.337729 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lzlm4\" (UID: \"84260b20-4df9-4dea-9524-bd9c18ef7074\") " pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.338139 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d889f5d6-d274-4604-bb80-1529caf804d0-config\") pod \"machine-approver-56656f9798-95bz9\" (UID: \"d889f5d6-d274-4604-bb80-1529caf804d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.338174 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6f38275-7eca-41e7-81a7-0bc5233ba757-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-pwk5v\" (UID: \"f6f38275-7eca-41e7-81a7-0bc5233ba757\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v" Feb 
27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.339559 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/d5929ffa-b478-440c-8efe-bad4b8f21e4e-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-77hdx\" (UID: \"d5929ffa-b478-440c-8efe-bad4b8f21e4e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hdx" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.339805 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf7d946e-7a0a-4d26-b3bb-ba0eb988994b-metrics-tls\") pod \"ingress-operator-5b745b69d9-c2nfw\" (UID: \"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.341075 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/d889f5d6-d274-4604-bb80-1529caf804d0-machine-approver-tls\") pod \"machine-approver-56656f9798-95bz9\" (UID: \"d889f5d6-d274-4604-bb80-1529caf804d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.341868 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5af2f048-e8b4-449c-8c5d-e4c81f2437d4-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nnfn\" (UID: \"5af2f048-e8b4-449c-8c5d-e4c81f2437d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.349777 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.362953 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/e29ddaa7-6347-4254-bec7-d84e84cd57bd-metrics-tls\") pod \"dns-operator-744455d44c-nl25w\" (UID: \"e29ddaa7-6347-4254-bec7-d84e84cd57bd\") " pod="openshift-dns-operator/dns-operator-744455d44c-nl25w" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.368072 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.377027 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5af2f048-e8b4-449c-8c5d-e4c81f2437d4-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nnfn\" (UID: \"5af2f048-e8b4-449c-8c5d-e4c81f2437d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.389154 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.407902 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.422748 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/aa17085d-69af-43ec-8abe-51906d32cd5f-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hwfzq\" (UID: \"aa17085d-69af-43ec-8abe-51906d32cd5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.436008 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.438974 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5935db5e-10d8-40a9-bc7c-102a18d42401-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4vsrd\" (UID: \"5935db5e-10d8-40a9-bc7c-102a18d42401\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439034 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1af108ab-bba9-4de6-bcbc-601fcba9e197-serving-cert\") pod \"service-ca-operator-777779d784-qtj4l\" (UID: \"1af108ab-bba9-4de6-bcbc-601fcba9e197\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439074 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/14aac296-ac45-4e74-91c1-069313c31337-profile-collector-cert\") pod \"catalog-operator-68c6474976-nhz26\" (UID: \"14aac296-ac45-4e74-91c1-069313c31337\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439111 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r6hs4\" (UniqueName: \"kubernetes.io/projected/c050b374-23f2-4a98-af19-fee47a82a879-kube-api-access-r6hs4\") pod \"control-plane-machine-set-operator-78cbb6b69f-jn4h4\" (UID: \"c050b374-23f2-4a98-af19-fee47a82a879\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn4h4" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439153 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mjwdq\" (UniqueName: \"kubernetes.io/projected/c8c016d5-5c1f-4680-a678-8568d218617e-kube-api-access-mjwdq\") pod \"auto-csr-approver-29536856-lj688\" (UID: \"c8c016d5-5c1f-4680-a678-8568d218617e\") " pod="openshift-infra/auto-csr-approver-29536856-lj688" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439214 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1af108ab-bba9-4de6-bcbc-601fcba9e197-config\") pod \"service-ca-operator-777779d784-qtj4l\" (UID: \"1af108ab-bba9-4de6-bcbc-601fcba9e197\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439298 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b6045acf-39a2-42d5-a92f-7ceb260e6e43-node-bootstrap-token\") pod \"machine-config-server-bpv6x\" (UID: \"b6045acf-39a2-42d5-a92f-7ceb260e6e43\") " pod="openshift-machine-config-operator/machine-config-server-bpv6x" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439353 4708 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5935db5e-10d8-40a9-bc7c-102a18d42401-proxy-tls\") pod \"machine-config-operator-74547568cd-4vsrd\" (UID: \"5935db5e-10d8-40a9-bc7c-102a18d42401\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439397 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lzlm4\" (UID: \"84260b20-4df9-4dea-9524-bd9c18ef7074\") " pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439440 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcgr4\" (UniqueName: \"kubernetes.io/projected/f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8-kube-api-access-vcgr4\") pod \"machine-config-controller-84d6567774-bkpsd\" (UID: \"f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439483 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl7hw\" (UniqueName: \"kubernetes.io/projected/ee155f68-76dd-411e-8617-05e452690cdf-kube-api-access-wl7hw\") pod \"olm-operator-6b444d44fb-qpj27\" (UID: \"ee155f68-76dd-411e-8617-05e452690cdf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439530 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvdbr\" (UniqueName: \"kubernetes.io/projected/a334b9f5-9e47-48a5-97e2-481df00ce760-kube-api-access-fvdbr\") pod \"package-server-manager-789f6589d5-5pcgl\" (UID: \"a334b9f5-9e47-48a5-97e2-481df00ce760\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439667 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee155f68-76dd-411e-8617-05e452690cdf-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qpj27\" (UID: \"ee155f68-76dd-411e-8617-05e452690cdf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439756 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5935db5e-10d8-40a9-bc7c-102a18d42401-auth-proxy-config\") pod \"machine-config-operator-74547568cd-4vsrd\" (UID: \"5935db5e-10d8-40a9-bc7c-102a18d42401\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439801 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/14aac296-ac45-4e74-91c1-069313c31337-srv-cert\") pod \"catalog-operator-68c6474976-nhz26\" (UID: \"14aac296-ac45-4e74-91c1-069313c31337\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439868 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r5zs6\" (UID: \"0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.439920 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8-config\") pod \"kube-apiserver-operator-766d6c64bb-r5zs6\" (UID: \"0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.440009 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5935db5e-10d8-40a9-bc7c-102a18d42401-images\") pod \"machine-config-operator-74547568cd-4vsrd\" (UID: \"5935db5e-10d8-40a9-bc7c-102a18d42401\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.440067 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80ea1e3c-71c9-4fa8-bd21-15e217d09023-apiservice-cert\") pod \"packageserver-d55dfcdfc-rmvrl\" (UID: \"80ea1e3c-71c9-4fa8-bd21-15e217d09023\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.440242 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a334b9f5-9e47-48a5-97e2-481df00ce760-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5pcgl\" (UID: \"a334b9f5-9e47-48a5-97e2-481df00ce760\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.440288 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cdslk\" (UniqueName: \"kubernetes.io/projected/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-kube-api-access-cdslk\") pod \"router-default-5444994796-n69rk\" (UID: \"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.440332 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrcz2\" (UniqueName: \"kubernetes.io/projected/14aac296-ac45-4e74-91c1-069313c31337-kube-api-access-nrcz2\") pod \"catalog-operator-68c6474976-nhz26\" (UID: \"14aac296-ac45-4e74-91c1-069313c31337\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.440371 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/350b2f42-3a86-4113-9dd2-bfe644158993-signing-key\") pod \"service-ca-9c57cc56f-kfxs6\" (UID: \"350b2f42-3a86-4113-9dd2-bfe644158993\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfxs6" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.440411 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/399886aa-6188-4575-905d-ae9888853692-cert\") pod \"ingress-canary-5nggf\" (UID: 
\"399886aa-6188-4575-905d-ae9888853692\") " pod="openshift-ingress-canary/ingress-canary-5nggf" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.440452 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bkpsd\" (UID: \"f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.441365 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-bkpsd\" (UID: \"f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.441710 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2hlql\" (UniqueName: \"kubernetes.io/projected/350b2f42-3a86-4113-9dd2-bfe644158993-kube-api-access-2hlql\") pod \"service-ca-9c57cc56f-kfxs6\" (UID: \"350b2f42-3a86-4113-9dd2-bfe644158993\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfxs6" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.441801 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4wwf\" (UniqueName: \"kubernetes.io/projected/80ea1e3c-71c9-4fa8-bd21-15e217d09023-kube-api-access-s4wwf\") pod \"packageserver-d55dfcdfc-rmvrl\" (UID: \"80ea1e3c-71c9-4fa8-bd21-15e217d09023\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.441841 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/ee155f68-76dd-411e-8617-05e452690cdf-srv-cert\") pod \"olm-operator-6b444d44fb-qpj27\" (UID: \"ee155f68-76dd-411e-8617-05e452690cdf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.441923 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lsvv\" (UniqueName: \"kubernetes.io/projected/b6045acf-39a2-42d5-a92f-7ceb260e6e43-kube-api-access-4lsvv\") pod \"machine-config-server-bpv6x\" (UID: \"b6045acf-39a2-42d5-a92f-7ceb260e6e43\") " pod="openshift-machine-config-operator/machine-config-server-bpv6x" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.441967 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30774ea6-14da-4a74-9090-797c655dd601-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lmkxt\" (UID: \"30774ea6-14da-4a74-9090-797c655dd601\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442045 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlrjp\" (UniqueName: \"kubernetes.io/projected/84260b20-4df9-4dea-9524-bd9c18ef7074-kube-api-access-zlrjp\") pod \"marketplace-operator-79b997595-lzlm4\" (UID: \"84260b20-4df9-4dea-9524-bd9c18ef7074\") " pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" 
Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442090 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80ea1e3c-71c9-4fa8-bd21-15e217d09023-webhook-cert\") pod \"packageserver-d55dfcdfc-rmvrl\" (UID: \"80ea1e3c-71c9-4fa8-bd21-15e217d09023\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442152 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fl86f\" (UniqueName: \"kubernetes.io/projected/5935db5e-10d8-40a9-bc7c-102a18d42401-kube-api-access-fl86f\") pod \"machine-config-operator-74547568cd-4vsrd\" (UID: \"5935db5e-10d8-40a9-bc7c-102a18d42401\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442198 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30774ea6-14da-4a74-9090-797c655dd601-config\") pod \"kube-controller-manager-operator-78b949d7b-lmkxt\" (UID: \"30774ea6-14da-4a74-9090-797c655dd601\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442242 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/80ea1e3c-71c9-4fa8-bd21-15e217d09023-tmpfs\") pod \"packageserver-d55dfcdfc-rmvrl\" (UID: \"80ea1e3c-71c9-4fa8-bd21-15e217d09023\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442302 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psnhx\" (UniqueName: \"kubernetes.io/projected/1af108ab-bba9-4de6-bcbc-601fcba9e197-kube-api-access-psnhx\") pod \"service-ca-operator-777779d784-qtj4l\" (UID: \"1af108ab-bba9-4de6-bcbc-601fcba9e197\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442359 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-metrics-certs\") pod \"router-default-5444994796-n69rk\" (UID: \"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442470 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r5zs6\" (UID: \"0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442515 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lzlm4\" (UID: \"84260b20-4df9-4dea-9524-bd9c18ef7074\") " pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442553 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c050b374-23f2-4a98-af19-fee47a82a879-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jn4h4\" (UID: \"c050b374-23f2-4a98-af19-fee47a82a879\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn4h4" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442604 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8-proxy-tls\") pod \"machine-config-controller-84d6567774-bkpsd\" (UID: \"f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442667 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/80ea1e3c-71c9-4fa8-bd21-15e217d09023-tmpfs\") pod \"packageserver-d55dfcdfc-rmvrl\" (UID: \"80ea1e3c-71c9-4fa8-bd21-15e217d09023\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442699 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/350b2f42-3a86-4113-9dd2-bfe644158993-signing-cabundle\") pod \"service-ca-9c57cc56f-kfxs6\" (UID: \"350b2f42-3a86-4113-9dd2-bfe644158993\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfxs6" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442733 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rfzcl\" (UniqueName: \"kubernetes.io/projected/399886aa-6188-4575-905d-ae9888853692-kube-api-access-rfzcl\") pod \"ingress-canary-5nggf\" (UID: \"399886aa-6188-4575-905d-ae9888853692\") " pod="openshift-ingress-canary/ingress-canary-5nggf" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442797 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-service-ca-bundle\") pod \"router-default-5444994796-n69rk\" (UID: \"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442837 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b6045acf-39a2-42d5-a92f-7ceb260e6e43-certs\") pod \"machine-config-server-bpv6x\" (UID: \"b6045acf-39a2-42d5-a92f-7ceb260e6e43\") " pod="openshift-machine-config-operator/machine-config-server-bpv6x" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442898 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30774ea6-14da-4a74-9090-797c655dd601-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lmkxt\" (UID: \"30774ea6-14da-4a74-9090-797c655dd601\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442938 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-stats-auth\") pod \"router-default-5444994796-n69rk\" (UID: 
\"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.442971 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-default-certificate\") pod \"router-default-5444994796-n69rk\" (UID: \"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.446679 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/aa17085d-69af-43ec-8abe-51906d32cd5f-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hwfzq\" (UID: \"aa17085d-69af-43ec-8abe-51906d32cd5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.468369 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.488979 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.509492 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.518011 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91805ba9-a3ff-4470-9302-cc2de796c19a-serving-cert\") pod \"console-operator-58897d9998-m8pjn\" (UID: \"91805ba9-a3ff-4470-9302-cc2de796c19a\") " pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.541945 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.548401 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91805ba9-a3ff-4470-9302-cc2de796c19a-trusted-ca\") pod \"console-operator-58897d9998-m8pjn\" (UID: \"91805ba9-a3ff-4470-9302-cc2de796c19a\") " pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.549269 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.555107 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91805ba9-a3ff-4470-9302-cc2de796c19a-config\") pod \"console-operator-58897d9998-m8pjn\" (UID: \"91805ba9-a3ff-4470-9302-cc2de796c19a\") " pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.593725 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.594286 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.609191 4708 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.633707 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.637151 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e1d71bdc-a8da-44da-a448-8ee75981e31c-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tthbr\" (UID: \"e1d71bdc-a8da-44da-a448-8ee75981e31c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.649161 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.656481 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1d71bdc-a8da-44da-a448-8ee75981e31c-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tthbr\" (UID: \"e1d71bdc-a8da-44da-a448-8ee75981e31c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.669397 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.691229 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.709717 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.714005 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/ee155f68-76dd-411e-8617-05e452690cdf-profile-collector-cert\") pod \"olm-operator-6b444d44fb-qpj27\" (UID: \"ee155f68-76dd-411e-8617-05e452690cdf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.717891 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1284b6e4-1c2c-443e-b18d-163396ede328-secret-volume\") pod \"collect-profiles-29536845-ws52p\" (UID: \"1284b6e4-1c2c-443e-b18d-163396ede328\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.724177 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/14aac296-ac45-4e74-91c1-069313c31337-profile-collector-cert\") pod \"catalog-operator-68c6474976-nhz26\" (UID: \"14aac296-ac45-4e74-91c1-069313c31337\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.730361 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.735341 
4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1284b6e4-1c2c-443e-b18d-163396ede328-config-volume\") pod \"collect-profiles-29536845-ws52p\" (UID: \"1284b6e4-1c2c-443e-b18d-163396ede328\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.749154 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.771246 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.789716 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.795623 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/14aac296-ac45-4e74-91c1-069313c31337-srv-cert\") pod \"catalog-operator-68c6474976-nhz26\" (UID: \"14aac296-ac45-4e74-91c1-069313c31337\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.808686 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.829274 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.849186 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.868910 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.878245 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-r5zs6\" (UID: \"0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.888843 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.891453 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8-config\") pod \"kube-apiserver-operator-766d6c64bb-r5zs6\" (UID: \"0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.908765 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.917270 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/ee155f68-76dd-411e-8617-05e452690cdf-srv-cert\") pod \"olm-operator-6b444d44fb-qpj27\" (UID: \"ee155f68-76dd-411e-8617-05e452690cdf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.930368 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.931179 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5935db5e-10d8-40a9-bc7c-102a18d42401-images\") pod \"machine-config-operator-74547568cd-4vsrd\" (UID: \"5935db5e-10d8-40a9-bc7c-102a18d42401\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.949705 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.968728 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.974221 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5935db5e-10d8-40a9-bc7c-102a18d42401-proxy-tls\") pod \"machine-config-operator-74547568cd-4vsrd\" (UID: \"5935db5e-10d8-40a9-bc7c-102a18d42401\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" Feb 27 16:57:07 crc kubenswrapper[4708]: I0227 16:57:07.989596 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.009166 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.029165 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.035427 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/350b2f42-3a86-4113-9dd2-bfe644158993-signing-key\") pod \"service-ca-9c57cc56f-kfxs6\" (UID: \"350b2f42-3a86-4113-9dd2-bfe644158993\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfxs6" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.048623 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.055005 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/350b2f42-3a86-4113-9dd2-bfe644158993-signing-cabundle\") pod \"service-ca-9c57cc56f-kfxs6\" (UID: \"350b2f42-3a86-4113-9dd2-bfe644158993\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfxs6" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.069448 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.089648 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Feb 27 16:57:08 crc kubenswrapper[4708]: 
I0227 16:57:08.109343 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.118041 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c050b374-23f2-4a98-af19-fee47a82a879-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-jn4h4\" (UID: \"c050b374-23f2-4a98-af19-fee47a82a879\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn4h4" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.129565 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.149168 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.167510 4708 request.go:700] Waited for 1.017165031s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dinstallation-pull-secrets&limit=500&resourceVersion=0 Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.169121 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.188992 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.208683 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.228304 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.236746 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-stats-auth\") pod \"router-default-5444994796-n69rk\" (UID: \"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.248805 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.268095 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.276565 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-metrics-certs\") pod \"router-default-5444994796-n69rk\" (UID: \"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.288478 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.297311 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-default-certificate\") pod \"router-default-5444994796-n69rk\" (UID: \"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.310589 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.328990 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.348804 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.355161 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-service-ca-bundle\") pod \"router-default-5444994796-n69rk\" (UID: \"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.368791 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.375895 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a334b9f5-9e47-48a5-97e2-481df00ce760-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5pcgl\" (UID: \"a334b9f5-9e47-48a5-97e2-481df00ce760\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.389375 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.400452 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8-proxy-tls\") pod \"machine-config-controller-84d6567774-bkpsd\" (UID: \"f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.409743 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.429265 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.439241 4708 secret.go:188] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.439346 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1af108ab-bba9-4de6-bcbc-601fcba9e197-serving-cert podName:1af108ab-bba9-4de6-bcbc-601fcba9e197 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:08.939316884 +0000 UTC m=+227.455114511 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1af108ab-bba9-4de6-bcbc-601fcba9e197-serving-cert") pod "service-ca-operator-777779d784-qtj4l" (UID: "1af108ab-bba9-4de6-bcbc-601fcba9e197") : failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.439489 4708 secret.go:188] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.439506 4708 secret.go:188] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.439595 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-operator-metrics podName:84260b20-4df9-4dea-9524-bd9c18ef7074 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:08.93956499 +0000 UTC m=+227.455362617 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-operator-metrics") pod "marketplace-operator-79b997595-lzlm4" (UID: "84260b20-4df9-4dea-9524-bd9c18ef7074") : failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.439593 4708 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.439625 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6045acf-39a2-42d5-a92f-7ceb260e6e43-node-bootstrap-token podName:b6045acf-39a2-42d5-a92f-7ceb260e6e43 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:08.939611421 +0000 UTC m=+227.455409048 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/b6045acf-39a2-42d5-a92f-7ceb260e6e43-node-bootstrap-token") pod "machine-config-server-bpv6x" (UID: "b6045acf-39a2-42d5-a92f-7ceb260e6e43") : failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.439686 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1af108ab-bba9-4de6-bcbc-601fcba9e197-config podName:1af108ab-bba9-4de6-bcbc-601fcba9e197 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:08.939661153 +0000 UTC m=+227.455458770 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/1af108ab-bba9-4de6-bcbc-601fcba9e197-config") pod "service-ca-operator-777779d784-qtj4l" (UID: "1af108ab-bba9-4de6-bcbc-601fcba9e197") : failed to sync configmap cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.440373 4708 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.440463 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80ea1e3c-71c9-4fa8-bd21-15e217d09023-apiservice-cert podName:80ea1e3c-71c9-4fa8-bd21-15e217d09023 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:08.940442033 +0000 UTC m=+227.456239650 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/80ea1e3c-71c9-4fa8-bd21-15e217d09023-apiservice-cert") pod "packageserver-d55dfcdfc-rmvrl" (UID: "80ea1e3c-71c9-4fa8-bd21-15e217d09023") : failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.440621 4708 secret.go:188] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.440746 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/399886aa-6188-4575-905d-ae9888853692-cert podName:399886aa-6188-4575-905d-ae9888853692 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:08.94072306 +0000 UTC m=+227.456520687 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/399886aa-6188-4575-905d-ae9888853692-cert") pod "ingress-canary-5nggf" (UID: "399886aa-6188-4575-905d-ae9888853692") : failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.444923 4708 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.445000 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80ea1e3c-71c9-4fa8-bd21-15e217d09023-webhook-cert podName:80ea1e3c-71c9-4fa8-bd21-15e217d09023 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:08.944979132 +0000 UTC m=+227.460776759 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/80ea1e3c-71c9-4fa8-bd21-15e217d09023-webhook-cert") pod "packageserver-d55dfcdfc-rmvrl" (UID: "80ea1e3c-71c9-4fa8-bd21-15e217d09023") : failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.445003 4708 configmap.go:193] Couldn't get configMap openshift-kube-controller-manager-operator/kube-controller-manager-operator-config: failed to sync configmap cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.445023 4708 configmap.go:193] Couldn't get configMap openshift-marketplace/marketplace-trusted-ca: failed to sync configmap cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.445035 4708 secret.go:188] Couldn't get secret openshift-kube-controller-manager-operator/kube-controller-manager-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.445035 4708 secret.go:188] Couldn't get secret openshift-machine-config-operator/machine-config-server-tls: failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.445083 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/30774ea6-14da-4a74-9090-797c655dd601-config podName:30774ea6-14da-4a74-9090-797c655dd601 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:08.945065075 +0000 UTC m=+227.460862692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/30774ea6-14da-4a74-9090-797c655dd601-config") pod "kube-controller-manager-operator-78b949d7b-lmkxt" (UID: "30774ea6-14da-4a74-9090-797c655dd601") : failed to sync configmap cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.445116 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b6045acf-39a2-42d5-a92f-7ceb260e6e43-certs podName:b6045acf-39a2-42d5-a92f-7ceb260e6e43 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:08.945093865 +0000 UTC m=+227.460891482 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "certs" (UniqueName: "kubernetes.io/secret/b6045acf-39a2-42d5-a92f-7ceb260e6e43-certs") pod "machine-config-server-bpv6x" (UID: "b6045acf-39a2-42d5-a92f-7ceb260e6e43") : failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.445144 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-trusted-ca podName:84260b20-4df9-4dea-9524-bd9c18ef7074 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:08.945131986 +0000 UTC m=+227.460929613 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-trusted-ca" (UniqueName: "kubernetes.io/configmap/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-trusted-ca") pod "marketplace-operator-79b997595-lzlm4" (UID: "84260b20-4df9-4dea-9524-bd9c18ef7074") : failed to sync configmap cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: E0227 16:57:08.445171 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30774ea6-14da-4a74-9090-797c655dd601-serving-cert podName:30774ea6-14da-4a74-9090-797c655dd601 nodeName:}" failed. No retries permitted until 2026-02-27 16:57:08.945157617 +0000 UTC m=+227.460955244 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/30774ea6-14da-4a74-9090-797c655dd601-serving-cert") pod "kube-controller-manager-operator-78b949d7b-lmkxt" (UID: "30774ea6-14da-4a74-9090-797c655dd601") : failed to sync secret cache: timed out waiting for the condition Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.459992 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.469074 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.489461 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.509324 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.529172 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.549296 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.569454 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.590472 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.609268 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.628457 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.649016 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.670270 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.689350 4708 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.709385 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.728656 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.749789 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.769522 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.789532 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.808657 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.828799 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.848665 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.868513 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.889096 4708 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.908526 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.929065 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.948828 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.976993 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lzlm4\" (UID: \"84260b20-4df9-4dea-9524-bd9c18ef7074\") " pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.977142 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b6045acf-39a2-42d5-a92f-7ceb260e6e43-certs\") pod \"machine-config-server-bpv6x\" (UID: \"b6045acf-39a2-42d5-a92f-7ceb260e6e43\") " pod="openshift-machine-config-operator/machine-config-server-bpv6x" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.977198 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1af108ab-bba9-4de6-bcbc-601fcba9e197-serving-cert\") pod \"service-ca-operator-777779d784-qtj4l\" (UID: \"1af108ab-bba9-4de6-bcbc-601fcba9e197\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.977271 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1af108ab-bba9-4de6-bcbc-601fcba9e197-config\") pod \"service-ca-operator-777779d784-qtj4l\" (UID: \"1af108ab-bba9-4de6-bcbc-601fcba9e197\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.977310 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b6045acf-39a2-42d5-a92f-7ceb260e6e43-node-bootstrap-token\") pod \"machine-config-server-bpv6x\" (UID: \"b6045acf-39a2-42d5-a92f-7ceb260e6e43\") " pod="openshift-machine-config-operator/machine-config-server-bpv6x" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.977341 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lzlm4\" (UID: \"84260b20-4df9-4dea-9524-bd9c18ef7074\") " pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.978136 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80ea1e3c-71c9-4fa8-bd21-15e217d09023-apiservice-cert\") pod \"packageserver-d55dfcdfc-rmvrl\" (UID: \"80ea1e3c-71c9-4fa8-bd21-15e217d09023\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.978201 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/399886aa-6188-4575-905d-ae9888853692-cert\") pod \"ingress-canary-5nggf\" (UID: \"399886aa-6188-4575-905d-ae9888853692\") " pod="openshift-ingress-canary/ingress-canary-5nggf" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.978292 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/30774ea6-14da-4a74-9090-797c655dd601-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lmkxt\" (UID: \"30774ea6-14da-4a74-9090-797c655dd601\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.978354 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80ea1e3c-71c9-4fa8-bd21-15e217d09023-webhook-cert\") pod \"packageserver-d55dfcdfc-rmvrl\" (UID: \"80ea1e3c-71c9-4fa8-bd21-15e217d09023\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.978394 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30774ea6-14da-4a74-9090-797c655dd601-config\") pod \"kube-controller-manager-operator-78b949d7b-lmkxt\" (UID: \"30774ea6-14da-4a74-9090-797c655dd601\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.978455 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1af108ab-bba9-4de6-bcbc-601fcba9e197-config\") pod \"service-ca-operator-777779d784-qtj4l\" (UID: \"1af108ab-bba9-4de6-bcbc-601fcba9e197\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.980081 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/30774ea6-14da-4a74-9090-797c655dd601-config\") pod \"kube-controller-manager-operator-78b949d7b-lmkxt\" (UID: \"30774ea6-14da-4a74-9090-797c655dd601\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.981210 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-lzlm4\" (UID: \"84260b20-4df9-4dea-9524-bd9c18ef7074\") " pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.982739 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/b6045acf-39a2-42d5-a92f-7ceb260e6e43-certs\") pod \"machine-config-server-bpv6x\" (UID: \"b6045acf-39a2-42d5-a92f-7ceb260e6e43\") " pod="openshift-machine-config-operator/machine-config-server-bpv6x" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.984064 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/80ea1e3c-71c9-4fa8-bd21-15e217d09023-webhook-cert\") pod \"packageserver-d55dfcdfc-rmvrl\" (UID: \"80ea1e3c-71c9-4fa8-bd21-15e217d09023\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.984483 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-lzlm4\" (UID: \"84260b20-4df9-4dea-9524-bd9c18ef7074\") " pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.984703 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/b6045acf-39a2-42d5-a92f-7ceb260e6e43-node-bootstrap-token\") pod \"machine-config-server-bpv6x\" (UID: \"b6045acf-39a2-42d5-a92f-7ceb260e6e43\") " pod="openshift-machine-config-operator/machine-config-server-bpv6x" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.984496 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1af108ab-bba9-4de6-bcbc-601fcba9e197-serving-cert\") pod \"service-ca-operator-777779d784-qtj4l\" (UID: \"1af108ab-bba9-4de6-bcbc-601fcba9e197\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.985254 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/30774ea6-14da-4a74-9090-797c655dd601-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lmkxt\" (UID: \"30774ea6-14da-4a74-9090-797c655dd601\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.985521 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/399886aa-6188-4575-905d-ae9888853692-cert\") pod \"ingress-canary-5nggf\" (UID: \"399886aa-6188-4575-905d-ae9888853692\") " pod="openshift-ingress-canary/ingress-canary-5nggf" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.985543 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/80ea1e3c-71c9-4fa8-bd21-15e217d09023-apiservice-cert\") pod \"packageserver-d55dfcdfc-rmvrl\" (UID: \"80ea1e3c-71c9-4fa8-bd21-15e217d09023\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:08 crc kubenswrapper[4708]: I0227 16:57:08.994227 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5pwt\" (UniqueName: \"kubernetes.io/projected/0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18-kube-api-access-n5pwt\") pod \"apiserver-7bbb656c7d-nw84d\" (UID: \"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.013200 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25r6m\" (UniqueName: \"kubernetes.io/projected/f47bdbdf-3cea-4337-be67-8b5f60ac8d09-kube-api-access-25r6m\") pod \"openshift-apiserver-operator-796bbdcf4f-vcbnb\" (UID: \"f47bdbdf-3cea-4337-be67-8b5f60ac8d09\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.034923 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bflbq\" (UniqueName: \"kubernetes.io/projected/3e664390-b33c-4aa5-972c-732c8ca37fda-kube-api-access-bflbq\") pod \"etcd-operator-b45778765-gnspz\" (UID: \"3e664390-b33c-4aa5-972c-732c8ca37fda\") " pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.053908 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct7p7\" (UniqueName: \"kubernetes.io/projected/b5731e3c-f903-4516-8c08-43113e79a4ba-kube-api-access-ct7p7\") pod \"openshift-config-operator-7777fb866f-qg7fq\" (UID: \"b5731e3c-f903-4516-8c08-43113e79a4ba\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.075603 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5bd8t\" (UniqueName: \"kubernetes.io/projected/855a2824-4e4a-4eae-9e71-3bc0db42f169-kube-api-access-5bd8t\") pod \"route-controller-manager-6576b87f9c-gxppm\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.094919 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnphk\" (UniqueName: \"kubernetes.io/projected/96b1d3f2-9f87-4beb-9e2c-e6006fa90e65-kube-api-access-hnphk\") pod \"apiserver-76f77b778f-d7z7j\" (UID: 
\"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65\") " pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.115268 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd8cj\" (UniqueName: \"kubernetes.io/projected/26d12a6e-d830-4357-b372-9163d663448f-kube-api-access-vd8cj\") pod \"machine-api-operator-5694c8668f-s45vs\" (UID: \"26d12a6e-d830-4357-b372-9163d663448f\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.128170 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.134770 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwgt9\" (UniqueName: \"kubernetes.io/projected/b710111d-81c5-463d-b2ea-f7f3f5e27b90-kube-api-access-fwgt9\") pod \"controller-manager-879f6c89f-km9ss\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.158813 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzrrm\" (UniqueName: \"kubernetes.io/projected/0951d4d1-034f-4968-b8ca-a5016d5b38d6-kube-api-access-kzrrm\") pod \"authentication-operator-69f744f599-q7prd\" (UID: \"0951d4d1-034f-4968-b8ca-a5016d5b38d6\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.160506 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.168001 4708 request.go:700] Waited for 1.937052874s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/serviceaccounts/console/token Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.175459 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqrs8\" (UniqueName: \"kubernetes.io/projected/3de1e003-2dee-4d76-86cd-cd60680535bd-kube-api-access-qqrs8\") pod \"oauth-openshift-558db77b4-55dsj\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.179128 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.200728 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqcks\" (UniqueName: \"kubernetes.io/projected/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-kube-api-access-nqcks\") pod \"console-f9d7485db-cl8l9\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.210026 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.216484 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.230497 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.234805 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.249718 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.251519 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.262167 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.271716 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.276448 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.284281 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.295933 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twlt9\" (UniqueName: \"kubernetes.io/projected/3bbf873e-72f0-4743-a2bc-4866dd8b8f86-kube-api-access-twlt9\") pod \"downloads-7954f5f757-bhsw7\" (UID: \"3bbf873e-72f0-4743-a2bc-4866dd8b8f86\") " pod="openshift-console/downloads-7954f5f757-bhsw7" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.311134 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-bhsw7" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.313501 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t4m4\" (UniqueName: \"kubernetes.io/projected/f6f38275-7eca-41e7-81a7-0bc5233ba757-kube-api-access-2t4m4\") pod \"openshift-controller-manager-operator-756b6f6bc6-pwk5v\" (UID: \"f6f38275-7eca-41e7-81a7-0bc5233ba757\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.326724 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tq2j\" (UniqueName: \"kubernetes.io/projected/1284b6e4-1c2c-443e-b18d-163396ede328-kube-api-access-5tq2j\") pod \"collect-profiles-29536845-ws52p\" (UID: \"1284b6e4-1c2c-443e-b18d-163396ede328\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.348367 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/5af2f048-e8b4-449c-8c5d-e4c81f2437d4-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-4nnfn\" (UID: \"5af2f048-e8b4-449c-8c5d-e4c81f2437d4\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.367766 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghdvl\" (UniqueName: \"kubernetes.io/projected/91805ba9-a3ff-4470-9302-cc2de796c19a-kube-api-access-ghdvl\") pod \"console-operator-58897d9998-m8pjn\" (UID: \"91805ba9-a3ff-4470-9302-cc2de796c19a\") " pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.373576 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.381005 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kk68m\" (UniqueName: \"kubernetes.io/projected/e1d71bdc-a8da-44da-a448-8ee75981e31c-kube-api-access-kk68m\") pod \"kube-storage-version-migrator-operator-b67b599dd-tthbr\" (UID: \"e1d71bdc-a8da-44da-a448-8ee75981e31c\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.384983 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.392259 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.398744 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.403128 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5gtp\" (UniqueName: \"kubernetes.io/projected/bf7d946e-7a0a-4d26-b3bb-ba0eb988994b-kube-api-access-d5gtp\") pod \"ingress-operator-5b745b69d9-c2nfw\" (UID: \"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.423621 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t45r\" (UniqueName: \"kubernetes.io/projected/d5929ffa-b478-440c-8efe-bad4b8f21e4e-kube-api-access-7t45r\") pod \"cluster-samples-operator-665b6dd947-77hdx\" (UID: \"d5929ffa-b478-440c-8efe-bad4b8f21e4e\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hdx" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.443374 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.445118 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nrsl\" (UniqueName: \"kubernetes.io/projected/f9d4819e-1f9b-43dc-9ef6-96fdb3f9c624-kube-api-access-4nrsl\") pod \"migrator-59844c95c7-pjlg4\" (UID: \"f9d4819e-1f9b-43dc-9ef6-96fdb3f9c624\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pjlg4" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.463240 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx8zr\" (UniqueName: \"kubernetes.io/projected/d889f5d6-d274-4604-bb80-1529caf804d0-kube-api-access-mx8zr\") pod \"machine-approver-56656f9798-95bz9\" (UID: \"d889f5d6-d274-4604-bb80-1529caf804d0\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.480826 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf7d946e-7a0a-4d26-b3bb-ba0eb988994b-bound-sa-token\") pod \"ingress-operator-5b745b69d9-c2nfw\" (UID: \"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.504927 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/aa17085d-69af-43ec-8abe-51906d32cd5f-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hwfzq\" (UID: \"aa17085d-69af-43ec-8abe-51906d32cd5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.530898 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pv7js\" (UniqueName: \"kubernetes.io/projected/e29ddaa7-6347-4254-bec7-d84e84cd57bd-kube-api-access-pv7js\") pod \"dns-operator-744455d44c-nl25w\" (UID: \"e29ddaa7-6347-4254-bec7-d84e84cd57bd\") " pod="openshift-dns-operator/dns-operator-744455d44c-nl25w" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.557375 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn7s6\" (UniqueName: \"kubernetes.io/projected/aa17085d-69af-43ec-8abe-51906d32cd5f-kube-api-access-tn7s6\") pod 
\"cluster-image-registry-operator-dc59b4c8b-hwfzq\" (UID: \"aa17085d-69af-43ec-8abe-51906d32cd5f\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.564512 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6hs4\" (UniqueName: \"kubernetes.io/projected/c050b374-23f2-4a98-af19-fee47a82a879-kube-api-access-r6hs4\") pod \"control-plane-machine-set-operator-78cbb6b69f-jn4h4\" (UID: \"c050b374-23f2-4a98-af19-fee47a82a879\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn4h4" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.581992 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mjwdq\" (UniqueName: \"kubernetes.io/projected/c8c016d5-5c1f-4680-a678-8568d218617e-kube-api-access-mjwdq\") pod \"auto-csr-approver-29536856-lj688\" (UID: \"c8c016d5-5c1f-4680-a678-8568d218617e\") " pod="openshift-infra/auto-csr-approver-29536856-lj688" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.592034 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.600307 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.607907 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcgr4\" (UniqueName: \"kubernetes.io/projected/f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8-kube-api-access-vcgr4\") pod \"machine-config-controller-84d6567774-bkpsd\" (UID: \"f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.614415 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hdx" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.620657 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.626246 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl7hw\" (UniqueName: \"kubernetes.io/projected/ee155f68-76dd-411e-8617-05e452690cdf-kube-api-access-wl7hw\") pod \"olm-operator-6b444d44fb-qpj27\" (UID: \"ee155f68-76dd-411e-8617-05e452690cdf\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.644094 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvdbr\" (UniqueName: \"kubernetes.io/projected/a334b9f5-9e47-48a5-97e2-481df00ce760-kube-api-access-fvdbr\") pod \"package-server-manager-789f6589d5-5pcgl\" (UID: \"a334b9f5-9e47-48a5-97e2-481df00ce760\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.662210 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-nl25w" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.668328 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pjlg4" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.669622 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-r5zs6\" (UID: \"0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.679994 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.681869 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cdslk\" (UniqueName: \"kubernetes.io/projected/f91736b1-bf6f-426e-8c0f-cfaac70c16f1-kube-api-access-cdslk\") pod \"router-default-5444994796-n69rk\" (UID: \"f91736b1-bf6f-426e-8c0f-cfaac70c16f1\") " pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.702084 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrcz2\" (UniqueName: \"kubernetes.io/projected/14aac296-ac45-4e74-91c1-069313c31337-kube-api-access-nrcz2\") pod \"catalog-operator-68c6474976-nhz26\" (UID: \"14aac296-ac45-4e74-91c1-069313c31337\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.707658 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.712309 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.718643 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.722756 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4wwf\" (UniqueName: \"kubernetes.io/projected/80ea1e3c-71c9-4fa8-bd21-15e217d09023-kube-api-access-s4wwf\") pod \"packageserver-d55dfcdfc-rmvrl\" (UID: \"80ea1e3c-71c9-4fa8-bd21-15e217d09023\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.743468 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2hlql\" (UniqueName: \"kubernetes.io/projected/350b2f42-3a86-4113-9dd2-bfe644158993-kube-api-access-2hlql\") pod \"service-ca-9c57cc56f-kfxs6\" (UID: \"350b2f42-3a86-4113-9dd2-bfe644158993\") " pod="openshift-service-ca/service-ca-9c57cc56f-kfxs6" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.761490 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lsvv\" (UniqueName: \"kubernetes.io/projected/b6045acf-39a2-42d5-a92f-7ceb260e6e43-kube-api-access-4lsvv\") pod \"machine-config-server-bpv6x\" (UID: \"b6045acf-39a2-42d5-a92f-7ceb260e6e43\") " pod="openshift-machine-config-operator/machine-config-server-bpv6x" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.768710 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn4h4" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.789640 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlrjp\" (UniqueName: \"kubernetes.io/projected/84260b20-4df9-4dea-9524-bd9c18ef7074-kube-api-access-zlrjp\") pod \"marketplace-operator-79b997595-lzlm4\" (UID: \"84260b20-4df9-4dea-9524-bd9c18ef7074\") " pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.792697 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536856-lj688" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.808970 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fl86f\" (UniqueName: \"kubernetes.io/projected/5935db5e-10d8-40a9-bc7c-102a18d42401-kube-api-access-fl86f\") pod \"machine-config-operator-74547568cd-4vsrd\" (UID: \"5935db5e-10d8-40a9-bc7c-102a18d42401\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.810715 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm"] Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.816792 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.817507 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.832972 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-q7prd"] Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.833021 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-s45vs"] Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.833881 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.834322 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.837989 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq"] Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.838819 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-d7z7j"] Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.845196 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psnhx\" (UniqueName: \"kubernetes.io/projected/1af108ab-bba9-4de6-bcbc-601fcba9e197-kube-api-access-psnhx\") pod \"service-ca-operator-777779d784-qtj4l\" (UID: \"1af108ab-bba9-4de6-bcbc-601fcba9e197\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.856683 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.860959 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.861184 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb"] Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.872873 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rfzcl\" (UniqueName: \"kubernetes.io/projected/399886aa-6188-4575-905d-ae9888853692-kube-api-access-rfzcl\") pod \"ingress-canary-5nggf\" (UID: \"399886aa-6188-4575-905d-ae9888853692\") " pod="openshift-ingress-canary/ingress-canary-5nggf" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.874319 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/30774ea6-14da-4a74-9090-797c655dd601-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lmkxt\" (UID: \"30774ea6-14da-4a74-9090-797c655dd601\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.912539 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p"] Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.914270 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" event={"ID":"d889f5d6-d274-4604-bb80-1529caf804d0","Type":"ContainerStarted","Data":"d717338dfcf249578afc7f182d0db791823282b52ee28bae4d33df0e1b3cb4b9"} Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.914338 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-bpv6x" Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.930042 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-gnspz"] Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.931125 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d"] Feb 27 16:57:09 crc kubenswrapper[4708]: I0227 16:57:09.938768 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-55dsj"] Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.001731 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.001771 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-bound-sa-token\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.001791 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-mountpoint-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.001818 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e11dd889-39c0-43fc-aae8-fef332bad5ed-ca-trust-extracted\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.001947 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-plugins-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.002047 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e11dd889-39c0-43fc-aae8-fef332bad5ed-trusted-ca\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.002077 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-socket-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " 
pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.002094 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-csi-data-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.002120 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-registry-tls\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.002138 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-registration-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.002210 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e11dd889-39c0-43fc-aae8-fef332bad5ed-installation-pull-secrets\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.002233 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d3404a6-1443-4eac-8087-3a89092bf1be-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gmp65\" (UID: \"1d3404a6-1443-4eac-8087-3a89092bf1be\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gmp65" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.002251 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb8h2\" (UniqueName: \"kubernetes.io/projected/1d3404a6-1443-4eac-8087-3a89092bf1be-kube-api-access-fb8h2\") pod \"multus-admission-controller-857f4d67dd-gmp65\" (UID: \"1d3404a6-1443-4eac-8087-3a89092bf1be\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gmp65" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.002274 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e11dd889-39c0-43fc-aae8-fef332bad5ed-registry-certificates\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.002293 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mzb9x\" (UniqueName: \"kubernetes.io/projected/13ba4c67-0444-463e-94f9-80da83971df5-kube-api-access-mzb9x\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc 
kubenswrapper[4708]: I0227 16:57:10.002365 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzcvm\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-kube-api-access-tzcvm\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: E0227 16:57:10.002938 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:10.502917497 +0000 UTC m=+229.018715084 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.025791 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.032070 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-kfxs6" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.103409 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:10 crc kubenswrapper[4708]: E0227 16:57:10.103520 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:10.603496242 +0000 UTC m=+229.119293829 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104108 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104130 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-bound-sa-token\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104145 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-mountpoint-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104227 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e11dd889-39c0-43fc-aae8-fef332bad5ed-ca-trust-extracted\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104338 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws54t\" (UniqueName: \"kubernetes.io/projected/67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82-kube-api-access-ws54t\") pod \"dns-default-qvsn8\" (UID: \"67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82\") " pod="openshift-dns/dns-default-qvsn8" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104412 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-plugins-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104438 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e11dd889-39c0-43fc-aae8-fef332bad5ed-trusted-ca\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104478 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: 
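[annotation] Both the MountDevice failure for the new image-registry pod (e11dd889-...) and the TearDown failure for the old one (8f668bae-...) have the same root cause: kubevirt.io.hostpath-provisioner is not yet in the kubelet's list of registered CSI drivers. The driver is itself a pod on this node, csi-hostpathplugin-8zjdt, which this very log is in the middle of starting; the errors resolve on their own once the plugin registers over the kubelet's plugin-registration socket. A minimal sketch of checking registration from the API side, assuming a placeholder kubeconfig path and this log's node name "crc":

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // The CSINode object lists the drivers registered on a given node; the
        // kubelet error above means this list does not (yet) contain
        // kubevirt.io.hostpath-provisioner for node "crc".
        csiNode, err := cs.StorageV1().CSINodes().Get(context.Background(), "crc", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, d := range csiNode.Spec.Drivers {
            fmt.Println("registered driver:", d.Name)
        }
    }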
\"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-socket-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104513 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-csi-data-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104545 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-registry-tls\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104561 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82-metrics-tls\") pod \"dns-default-qvsn8\" (UID: \"67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82\") " pod="openshift-dns/dns-default-qvsn8" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104578 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-registration-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104593 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82-config-volume\") pod \"dns-default-qvsn8\" (UID: \"67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82\") " pod="openshift-dns/dns-default-qvsn8" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104718 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e11dd889-39c0-43fc-aae8-fef332bad5ed-installation-pull-secrets\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104733 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d3404a6-1443-4eac-8087-3a89092bf1be-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gmp65\" (UID: \"1d3404a6-1443-4eac-8087-3a89092bf1be\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gmp65" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104758 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb8h2\" (UniqueName: \"kubernetes.io/projected/1d3404a6-1443-4eac-8087-3a89092bf1be-kube-api-access-fb8h2\") pod \"multus-admission-controller-857f4d67dd-gmp65\" (UID: \"1d3404a6-1443-4eac-8087-3a89092bf1be\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gmp65" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104801 4708 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e11dd889-39c0-43fc-aae8-fef332bad5ed-registry-certificates\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104862 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mzb9x\" (UniqueName: \"kubernetes.io/projected/13ba4c67-0444-463e-94f9-80da83971df5-kube-api-access-mzb9x\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.104915 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzcvm\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-kube-api-access-tzcvm\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.105824 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-socket-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.106192 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-csi-data-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.110237 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-registration-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.110924 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e11dd889-39c0-43fc-aae8-fef332bad5ed-registry-certificates\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: E0227 16:57:10.112555 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:10.61253927 +0000 UTC m=+229.128336857 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.112615 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-mountpoint-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.112658 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/13ba4c67-0444-463e-94f9-80da83971df5-plugins-dir\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.114069 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e11dd889-39c0-43fc-aae8-fef332bad5ed-ca-trust-extracted\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.114404 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e11dd889-39c0-43fc-aae8-fef332bad5ed-trusted-ca\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.127683 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/1d3404a6-1443-4eac-8087-3a89092bf1be-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gmp65\" (UID: \"1d3404a6-1443-4eac-8087-3a89092bf1be\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gmp65" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.132972 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-registry-tls\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.135070 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e11dd889-39c0-43fc-aae8-fef332bad5ed-installation-pull-secrets\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.137107 4708 util.go:30] "No sandbox for pod can be found. 
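[annotation] The "No retries permitted until ... (durationBeforeRetry 500ms)" wording is the volume manager's per-operation exponential backoff: each failed attempt blocks retries for a delay that starts at 500ms and grows until the operation finally succeeds or a cap is hit. The same pattern is available from apimachinery's wait package; a minimal sketch with illustrative values, where the condition stands in for "has the CSI driver registered yet":

    package main

    import (
        "errors"
        "fmt"
        "time"

        "k8s.io/apimachinery/pkg/util/wait"
    )

    func main() {
        // Start at 500ms and double on each failure, as the log's retry
        // delays do; Steps bounds the number of attempts (illustrative).
        backoff := wait.Backoff{
            Duration: 500 * time.Millisecond,
            Factor:   2.0,
            Steps:    5,
        }
        attempt := 0
        err := wait.ExponentialBackoff(backoff, func() (bool, error) {
            attempt++
            fmt.Println("mount attempt", attempt)
            // Pretend the driver registers on the 4th try.
            if attempt < 4 {
                return false, nil // not done yet; wait out the next delay
            }
            return true, nil
        })
        if errors.Is(err, wait.ErrWaitTimeout) {
            fmt.Println("gave up: driver never registered")
        }
    }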
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.144506 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb8h2\" (UniqueName: \"kubernetes.io/projected/1d3404a6-1443-4eac-8087-3a89092bf1be-kube-api-access-fb8h2\") pod \"multus-admission-controller-857f4d67dd-gmp65\" (UID: \"1d3404a6-1443-4eac-8087-3a89092bf1be\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gmp65" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.145310 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gmp65" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.167387 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-bound-sa-token\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.170091 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-5nggf" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.191347 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mzb9x\" (UniqueName: \"kubernetes.io/projected/13ba4c67-0444-463e-94f9-80da83971df5-kube-api-access-mzb9x\") pod \"csi-hostpathplugin-8zjdt\" (UID: \"13ba4c67-0444-463e-94f9-80da83971df5\") " pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.206406 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.206601 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws54t\" (UniqueName: \"kubernetes.io/projected/67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82-kube-api-access-ws54t\") pod \"dns-default-qvsn8\" (UID: \"67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82\") " pod="openshift-dns/dns-default-qvsn8" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.206650 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82-metrics-tls\") pod \"dns-default-qvsn8\" (UID: \"67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82\") " pod="openshift-dns/dns-default-qvsn8" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.206668 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82-config-volume\") pod \"dns-default-qvsn8\" (UID: \"67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82\") " pod="openshift-dns/dns-default-qvsn8" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.207368 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82-config-volume\") pod \"dns-default-qvsn8\" (UID: 
\"67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82\") " pod="openshift-dns/dns-default-qvsn8" Feb 27 16:57:10 crc kubenswrapper[4708]: E0227 16:57:10.207545 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:10.707427725 +0000 UTC m=+229.223225312 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.209716 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzcvm\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-kube-api-access-tzcvm\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.223243 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82-metrics-tls\") pod \"dns-default-qvsn8\" (UID: \"67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82\") " pod="openshift-dns/dns-default-qvsn8" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.244897 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws54t\" (UniqueName: \"kubernetes.io/projected/67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82-kube-api-access-ws54t\") pod \"dns-default-qvsn8\" (UID: \"67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82\") " pod="openshift-dns/dns-default-qvsn8" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.309000 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: E0227 16:57:10.313526 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:10.813504315 +0000 UTC m=+229.329301902 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.346554 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr"] Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.346587 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn"] Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.346596 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-m8pjn"] Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.346605 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-bhsw7"] Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.346615 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-km9ss"] Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.377817 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-cl8l9"] Feb 27 16:57:10 crc kubenswrapper[4708]: W0227 16:57:10.380557 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5af2f048_e8b4_449c_8c5d_e4c81f2437d4.slice/crio-a4bf6da32c1c15575480b9107426242b5aab796342e2ba61d235d9641b99bbbb WatchSource:0}: Error finding container a4bf6da32c1c15575480b9107426242b5aab796342e2ba61d235d9641b99bbbb: Status 404 returned error can't find the container with id a4bf6da32c1c15575480b9107426242b5aab796342e2ba61d235d9641b99bbbb Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.410471 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:10 crc kubenswrapper[4708]: E0227 16:57:10.411475 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:10.911461352 +0000 UTC m=+229.427258939 (durationBeforeRetry 500ms). 
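[annotation] The interleaved reconciler_common.go lines are all one loop: the kubelet's volume manager diffs its desired state of world (volumes the scheduled pods need) against the actual state of world (what is attached and mounted), starting UnmountVolume for volumes only the departed pod (8f668bae-...) still holds and MountVolume for volumes the replacement pods need, which is why the same PVC appears in both directions in the same pass. A highly simplified sketch of that idea, with invented types that are not the kubelet's own:

    package main

    import "fmt"

    // reconcile sketches the desired-vs-actual pass: anything mounted but no
    // longer desired is unmounted, anything desired but not mounted is mounted.
    func reconcile(desired, actual map[string]bool) {
        for vol := range actual {
            if !desired[vol] {
                fmt.Println("UnmountVolume started for", vol)
            }
        }
        for vol := range desired {
            if !actual[vol] {
                fmt.Println("MountVolume started for", vol)
            }
        }
    }

    func main() {
        // pvc-657094db-... is wanted by the new image-registry pod while still
        // held by the old one, so one pass emits both operations for it.
        desired := map[string]bool{"pvc-657094db (pod e11dd889)": true}
        actual := map[string]bool{"pvc-657094db (pod 8f668bae)": true}
        reconcile(desired, actual)
    }

The 404 watch-event warning just above is a benign race of the same kind: cadvisor notices a new crio cgroup before the container is queryable, and the watch retries.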
Feb 27 16:57:10 crc kubenswrapper[4708]: E0227 16:57:10.411475 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:10.911461352 +0000 UTC m=+229.427258939 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.468428 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-nl25w"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.481444 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-8zjdt"
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.507866 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.512529 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w"
Feb 27 16:57:10 crc kubenswrapper[4708]: E0227 16:57:10.512862 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:11.012834278 +0000 UTC m=+229.528631865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.526198 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-qvsn8"
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.580691 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.600211 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn4h4"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.600257 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hdx"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.601939 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-pjlg4"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.611767 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.615349 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:57:10 crc kubenswrapper[4708]: E0227 16:57:10.615603 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:11.11558476 +0000 UTC m=+229.631382347 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.615673 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w"
Feb 27 16:57:10 crc kubenswrapper[4708]: E0227 16:57:10.616084 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:11.116065503 +0000 UTC m=+229.631863080 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.629968 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.637010 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.716150 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Feb 27 16:57:10 crc kubenswrapper[4708]: E0227 16:57:10.716431 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:11.216417972 +0000 UTC m=+229.732215559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.765031 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536856-lj688"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.773432 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.779955 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.796081 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.806087 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lzlm4"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.806158 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-kfxs6"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.810748 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl"]
Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.824604 4708 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:10 crc kubenswrapper[4708]: E0227 16:57:10.825274 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:11.325255425 +0000 UTC m=+229.841053012 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.825907 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd"] Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.833438 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd"] Feb 27 16:57:10 crc kubenswrapper[4708]: W0227 16:57:10.849049 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc8c016d5_5c1f_4680_a678_8568d218617e.slice/crio-d016c361f638fa2f9f6f5815497606dba972a6bb048115c899b00f3cee0f2f00 WatchSource:0}: Error finding container d016c361f638fa2f9f6f5815497606dba972a6bb048115c899b00f3cee0f2f00: Status 404 returned error can't find the container with id d016c361f638fa2f9f6f5815497606dba972a6bb048115c899b00f3cee0f2f00 Feb 27 16:57:10 crc kubenswrapper[4708]: W0227 16:57:10.858054 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc050b374_23f2_4a98_af19_fee47a82a879.slice/crio-fd9cc0e40c022090b7505ce8a852dfb25d30f954162669ab7c89d72f783fb85f WatchSource:0}: Error finding container fd9cc0e40c022090b7505ce8a852dfb25d30f954162669ab7c89d72f783fb85f: Status 404 returned error can't find the container with id fd9cc0e40c022090b7505ce8a852dfb25d30f954162669ab7c89d72f783fb85f Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.858917 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 16:57:10 crc kubenswrapper[4708]: W0227 16:57:10.895542 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c8da5c9_ed4d_480d_99fe_f43c05ea9cd8.slice/crio-8984c76aacd03a6d10a6e3219d1460a049afb0080934c884bc5a2e35c231cb0d WatchSource:0}: Error finding container 8984c76aacd03a6d10a6e3219d1460a049afb0080934c884bc5a2e35c231cb0d: Status 404 returned error can't find the container with id 8984c76aacd03a6d10a6e3219d1460a049afb0080934c884bc5a2e35c231cb0d Feb 27 16:57:10 crc kubenswrapper[4708]: W0227 16:57:10.905586 4708 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5935db5e_10d8_40a9_bc7c_102a18d42401.slice/crio-3298ee6da2b5861f0af9533c186a5053520405a821def9ea47eee7e1fbe0da4b WatchSource:0}: Error finding container 3298ee6da2b5861f0af9533c186a5053520405a821def9ea47eee7e1fbe0da4b: Status 404 returned error can't find the container with id 3298ee6da2b5861f0af9533c186a5053520405a821def9ea47eee7e1fbe0da4b Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.908931 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l"] Feb 27 16:57:10 crc kubenswrapper[4708]: W0227 16:57:10.910008 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b4d3cb_4de9_4fb8_870b_ed3e9760c2e8.slice/crio-ff1ac14584f98a88e98101a5d798dc6bd81eb5a3f4fa37f3a3686e6d73f7a1c5 WatchSource:0}: Error finding container ff1ac14584f98a88e98101a5d798dc6bd81eb5a3f4fa37f3a3686e6d73f7a1c5: Status 404 returned error can't find the container with id ff1ac14584f98a88e98101a5d798dc6bd81eb5a3f4fa37f3a3686e6d73f7a1c5 Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.927876 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:10 crc kubenswrapper[4708]: E0227 16:57:10.928157 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:11.428143621 +0000 UTC m=+229.943941198 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.947175 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-5nggf"] Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.953270 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" event={"ID":"80ea1e3c-71c9-4fa8-bd21-15e217d09023","Type":"ContainerStarted","Data":"cec7b335474cd3330b26e4b0ed6d4be138908b88494cc05c0e92c3409bddeb91"} Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.953897 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd" event={"ID":"f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8","Type":"ContainerStarted","Data":"ff1ac14584f98a88e98101a5d798dc6bd81eb5a3f4fa37f3a3686e6d73f7a1c5"} Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.962052 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" event={"ID":"855a2824-4e4a-4eae-9e71-3bc0db42f169","Type":"ContainerStarted","Data":"393b294e3f360cef03ec5b32796766e160f197157e5df6dcdf50e2f40d8f0a35"} Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.962086 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" event={"ID":"855a2824-4e4a-4eae-9e71-3bc0db42f169","Type":"ContainerStarted","Data":"0a3ee0ac6f8bd718e5642b37d80ba525079a7af49a83c11dc9a90b34c62a1e3c"} Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.962898 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.965357 4708 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-gxppm container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.965697 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" podUID="855a2824-4e4a-4eae-9e71-3bc0db42f169" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.973586 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cl8l9" event={"ID":"bd7c826a-ca70-4d4f-90ca-96f0b72c173a","Type":"ContainerStarted","Data":"738361c2651fbe219fc21eeca0a247a3e39bc9bf378f5d8fd9cd42cf55eedda0"} Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.981817 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" 
event={"ID":"3de1e003-2dee-4d76-86cd-cd60680535bd","Type":"ContainerStarted","Data":"78d34cb0d36361901ea445f033ed5cd63a907eb75ca6a4b011212a0584b7650a"} Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.982048 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.983959 4708 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-55dsj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body= Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.984010 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.984802 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn" event={"ID":"5af2f048-e8b4-449c-8c5d-e4c81f2437d4","Type":"ContainerStarted","Data":"a4bf6da32c1c15575480b9107426242b5aab796342e2ba61d235d9641b99bbbb"} Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.986406 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bhsw7" event={"ID":"3bbf873e-72f0-4743-a2bc-4866dd8b8f86","Type":"ContainerStarted","Data":"29131d82f2d0605c57f1165b465a3adef620f89066378360cfe77f7e7b3a0092"} Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.995628 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-n69rk" event={"ID":"f91736b1-bf6f-426e-8c0f-cfaac70c16f1","Type":"ContainerStarted","Data":"b33549f27f4424dfe162f406d9eaa358fc31f7d8a2aae2938064e18e084d12a7"} Feb 27 16:57:10 crc kubenswrapper[4708]: I0227 16:57:10.995651 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-n69rk" event={"ID":"f91736b1-bf6f-426e-8c0f-cfaac70c16f1","Type":"ContainerStarted","Data":"9f80c616a1448942edc5d784203b68012fccdeebd7dd2725bce3cf0b17866069"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.004932 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536856-lj688" event={"ID":"c8c016d5-5c1f-4680-a678-8568d218617e","Type":"ContainerStarted","Data":"d016c361f638fa2f9f6f5815497606dba972a6bb048115c899b00f3cee0f2f00"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.006120 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gmp65"] Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.007112 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" event={"ID":"14aac296-ac45-4e74-91c1-069313c31337","Type":"ContainerStarted","Data":"b9b37835a1f07d84bd90de5c091df5f771cbb5e4bd85dd5bd78e85fb28e0ab1a"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.013303 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" 
event={"ID":"1284b6e4-1c2c-443e-b18d-163396ede328","Type":"ContainerStarted","Data":"a8f01e7af3e88c8f59248409dbd41d37754ddfccda0e0f2944ffb70cfed48674"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.013332 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" event={"ID":"1284b6e4-1c2c-443e-b18d-163396ede328","Type":"ContainerStarted","Data":"32e99d0a63839f70764fc4eeb5774865e858a315aff248e02a79a02d776f142a"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.016296 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" event={"ID":"d889f5d6-d274-4604-bb80-1529caf804d0","Type":"ContainerStarted","Data":"ff368ac06953a400db390196b755dd9b9705224c1f75c6159ea6b93cede31ccb"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.026757 4708 generic.go:334] "Generic (PLEG): container finished" podID="b5731e3c-f903-4516-8c08-43113e79a4ba" containerID="ffb86c5bab0a1b5e21609f2e1bb2eedcc111118088b2ec750301099178d4d31b" exitCode=0 Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.027214 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" event={"ID":"b5731e3c-f903-4516-8c08-43113e79a4ba","Type":"ContainerDied","Data":"ffb86c5bab0a1b5e21609f2e1bb2eedcc111118088b2ec750301099178d4d31b"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.027243 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" event={"ID":"b5731e3c-f903-4516-8c08-43113e79a4ba","Type":"ContainerStarted","Data":"64d36a5a3f57515993636c296f1afd0a854b43347e1459aa35830c67d625bb8f"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.028770 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:11 crc kubenswrapper[4708]: E0227 16:57:11.029239 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:11.529225009 +0000 UTC m=+230.045022586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.035640 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" event={"ID":"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18","Type":"ContainerStarted","Data":"f6e0d047c8dd812f6d9eed1068f83a6880baab855b03262da52e71eefe26e21d"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.044093 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" event={"ID":"b710111d-81c5-463d-b2ea-f7f3f5e27b90","Type":"ContainerStarted","Data":"f1b32010ae29eae2dffa2eb65b12bccd6c11a4185cf69ca56044446db37c7ab0"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.044918 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.046385 4708 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-km9ss container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.046418 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" podUID="b710111d-81c5-463d-b2ea-f7f3f5e27b90" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.046605 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v" event={"ID":"f6f38275-7eca-41e7-81a7-0bc5233ba757","Type":"ContainerStarted","Data":"e1e2109a339a602c89ab32f966ba2e8c3f27b33c440a8ef513b2dd318075153e"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.051144 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" event={"ID":"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b","Type":"ContainerStarted","Data":"63e4c77692b3bac73b529a37b0a8702ad3e1dd9167569e8cad3ff0f751a399db"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.054350 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" event={"ID":"3e664390-b33c-4aa5-972c-732c8ca37fda","Type":"ContainerStarted","Data":"923c333285e840d2073a448f3e7d5e99d4f429566756013d7e4e98b4b6db8c5d"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.057966 4708 generic.go:334] "Generic (PLEG): container finished" podID="96b1d3f2-9f87-4beb-9e2c-e6006fa90e65" containerID="2873a6608a2c63ed9f9442e0e88056cab0b1d6567044588e12569af724d71898" exitCode=0 Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.058015 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" 
event={"ID":"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65","Type":"ContainerDied","Data":"2873a6608a2c63ed9f9442e0e88056cab0b1d6567044588e12569af724d71898"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.058033 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" event={"ID":"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65","Type":"ContainerStarted","Data":"c8cc345a74ede982a424b381505af9b8ca246f08f11e8735bae674cc64e672cf"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.065134 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qvsn8"] Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.065969 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6" event={"ID":"0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8","Type":"ContainerStarted","Data":"8984c76aacd03a6d10a6e3219d1460a049afb0080934c884bc5a2e35c231cb0d"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.067279 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr" event={"ID":"e1d71bdc-a8da-44da-a448-8ee75981e31c","Type":"ContainerStarted","Data":"831cfd0db733f4edef60ff8a1672f070adbf68a0b155a06ed4c121c3dc4f5d35"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.068000 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" event={"ID":"30774ea6-14da-4a74-9090-797c655dd601","Type":"ContainerStarted","Data":"920aff776a4e4401307bcd08751e5e9e473079c032328ce43fa490c6b60e4576"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.069553 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" event={"ID":"0951d4d1-034f-4968-b8ca-a5016d5b38d6","Type":"ContainerStarted","Data":"a4db7e7473c4baeb2f1ccbbba642a1e51f11b9916264ff451235838d8f624bed"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.069585 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" event={"ID":"0951d4d1-034f-4968-b8ca-a5016d5b38d6","Type":"ContainerStarted","Data":"0a1c15e0aa77acc9494dd4192b8a5f445fc01d00a87a55fbd7b3982ee60f640a"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.070324 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pjlg4" event={"ID":"f9d4819e-1f9b-43dc-9ef6-96fdb3f9c624","Type":"ContainerStarted","Data":"38a4cdba28b960d938389bf623f8a1d45dcb09a3f72cd01ef53ecc2ff486eec4"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.073107 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nl25w" event={"ID":"e29ddaa7-6347-4254-bec7-d84e84cd57bd","Type":"ContainerStarted","Data":"3b584e0fc7b2a510298a71720a12ca561102a9d1f35ceb2a07d7c4598ba92003"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.074487 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-bpv6x" event={"ID":"b6045acf-39a2-42d5-a92f-7ceb260e6e43","Type":"ContainerStarted","Data":"396e1232d7319585a8b08b9f90aa853b8870bc5288b74f4593a8e1947dd32359"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.074511 4708 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-server-bpv6x" event={"ID":"b6045acf-39a2-42d5-a92f-7ceb260e6e43","Type":"ContainerStarted","Data":"91d36d71d999d4a59dafc5b7bc042eb37cdd7f91dcdc6828d6d34f19aae14ae4"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.079569 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-m8pjn" event={"ID":"91805ba9-a3ff-4470-9302-cc2de796c19a","Type":"ContainerStarted","Data":"3d2cd7fe32b523dd3fd16c0183bce1501bab2ea775e0ecdc64b9d1795132048a"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.079595 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-m8pjn" event={"ID":"91805ba9-a3ff-4470-9302-cc2de796c19a","Type":"ContainerStarted","Data":"ba081a39c50806acbe719725d7aab64a73004498b3c65ffc468a877c3a2f0d76"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.080377 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.082476 4708 patch_prober.go:28] interesting pod/console-operator-58897d9998-m8pjn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" start-of-body= Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.082517 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-m8pjn" podUID="91805ba9-a3ff-4470-9302-cc2de796c19a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": dial tcp 10.217.0.18:8443: connect: connection refused" Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.083237 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" event={"ID":"26d12a6e-d830-4357-b372-9163d663448f","Type":"ContainerStarted","Data":"34e23e6b326d454a1a211e958adca5344e142c94a6f4d2c1dca73d2daf6d68f8"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.083296 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" event={"ID":"26d12a6e-d830-4357-b372-9163d663448f","Type":"ContainerStarted","Data":"ef3bf2d499c9dfb985da5d4b59ffc5c85eea2306f65993c9c13f1c266aa28bfd"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.085208 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb" event={"ID":"f47bdbdf-3cea-4337-be67-8b5f60ac8d09","Type":"ContainerStarted","Data":"d67de38634fb7ec15cfae6dcfa28766e620598391975dc763839396ec033641c"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.085254 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb" event={"ID":"f47bdbdf-3cea-4337-be67-8b5f60ac8d09","Type":"ContainerStarted","Data":"893f0ddd0bc6a011f66219faf8bfe20af125e6260a50107ecca785708247339f"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.087493 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn4h4" 
event={"ID":"c050b374-23f2-4a98-af19-fee47a82a879","Type":"ContainerStarted","Data":"fd9cc0e40c022090b7505ce8a852dfb25d30f954162669ab7c89d72f783fb85f"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.094264 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" event={"ID":"aa17085d-69af-43ec-8abe-51906d32cd5f","Type":"ContainerStarted","Data":"59cf0132b28ab5664d2c4568ae18809e22b7f564767ce563ba795330a25a392b"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.095448 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" event={"ID":"ee155f68-76dd-411e-8617-05e452690cdf","Type":"ContainerStarted","Data":"4f82ba120119a96c2961426ed3c01159e1d7a7855cc06a3936207b56053d22b2"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.096576 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kfxs6" event={"ID":"350b2f42-3a86-4113-9dd2-bfe644158993","Type":"ContainerStarted","Data":"6859f28ca5c1bf2af1586d453641fe2058f84f594bc74dac22914a0b1232d787"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.097693 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl" event={"ID":"a334b9f5-9e47-48a5-97e2-481df00ce760","Type":"ContainerStarted","Data":"ed93388e790dfeb7cef42d1247fe797810ebb29214b7e3294318780891a40adb"} Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.131803 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:11 crc kubenswrapper[4708]: E0227 16:57:11.132128 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:11.632106335 +0000 UTC m=+230.147903912 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.135255 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-8zjdt"] Feb 27 16:57:11 crc kubenswrapper[4708]: W0227 16:57:11.167716 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67c2f1a9_a175_4ff1_8f8a_17f7ac6eff82.slice/crio-d49b0e2ae612ac7518a23b26bdc385eca3462f5ed9326e057589c4fc4a9b54cc WatchSource:0}: Error finding container d49b0e2ae612ac7518a23b26bdc385eca3462f5ed9326e057589c4fc4a9b54cc: Status 404 returned error can't find the container with id d49b0e2ae612ac7518a23b26bdc385eca3462f5ed9326e057589c4fc4a9b54cc Feb 27 16:57:11 crc kubenswrapper[4708]: W0227 16:57:11.209517 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13ba4c67_0444_463e_94f9_80da83971df5.slice/crio-24ac844ca343c48d9c2d4865c5319db808298779d8536e8721a77260f989e3f6 WatchSource:0}: Error finding container 24ac844ca343c48d9c2d4865c5319db808298779d8536e8721a77260f989e3f6: Status 404 returned error can't find the container with id 24ac844ca343c48d9c2d4865c5319db808298779d8536e8721a77260f989e3f6 Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.236020 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:11 crc kubenswrapper[4708]: E0227 16:57:11.260601 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:11.760580794 +0000 UTC m=+230.276378371 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.361188 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:11 crc kubenswrapper[4708]: E0227 16:57:11.361545 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:11.861530289 +0000 UTC m=+230.377327876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.462711 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:11 crc kubenswrapper[4708]: E0227 16:57:11.463279 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:11.963268354 +0000 UTC m=+230.479065941 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.564344 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:11 crc kubenswrapper[4708]: E0227 16:57:11.564644 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:12.064630319 +0000 UTC m=+230.580427906 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.667758 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:11 crc kubenswrapper[4708]: E0227 16:57:11.668349 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:12.168339457 +0000 UTC m=+230.684137044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.758383 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-vcbnb" podStartSLOduration=179.758365695 podStartE2EDuration="2m59.758365695s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:11.719116642 +0000 UTC m=+230.234914229" watchObservedRunningTime="2026-02-27 16:57:11.758365695 +0000 UTC m=+230.274163282" Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.771315 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:11 crc kubenswrapper[4708]: E0227 16:57:11.771611 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:12.271597953 +0000 UTC m=+230.787395540 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.807520 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" podStartSLOduration=179.807501667 podStartE2EDuration="2m59.807501667s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:11.757208614 +0000 UTC m=+230.273006201" watchObservedRunningTime="2026-02-27 16:57:11.807501667 +0000 UTC m=+230.323299254" Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.820023 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-n69rk" Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.826339 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:57:11 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld Feb 27 16:57:11 crc kubenswrapper[4708]: [+]process-running ok Feb 27 16:57:11 crc kubenswrapper[4708]: healthz check failed Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.826387 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.855804 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" podStartSLOduration=179.855787137 podStartE2EDuration="2m59.855787137s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:11.809620813 +0000 UTC m=+230.325418400" watchObservedRunningTime="2026-02-27 16:57:11.855787137 +0000 UTC m=+230.371584724" Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.872401 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:11 crc kubenswrapper[4708]: E0227 16:57:11.872734 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:12.372722332 +0000 UTC m=+230.888519919 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.884361 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-n69rk" podStartSLOduration=178.884343718 podStartE2EDuration="2m58.884343718s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:11.857459461 +0000 UTC m=+230.373257048" watchObservedRunningTime="2026-02-27 16:57:11.884343718 +0000 UTC m=+230.400141305" Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.885215 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-q7prd" podStartSLOduration=179.885209511 podStartE2EDuration="2m59.885209511s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:11.883698811 +0000 UTC m=+230.399496398" watchObservedRunningTime="2026-02-27 16:57:11.885209511 +0000 UTC m=+230.401007098" Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.933894 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-m8pjn" podStartSLOduration=179.933878661 podStartE2EDuration="2m59.933878661s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:11.932116554 +0000 UTC m=+230.447914141" watchObservedRunningTime="2026-02-27 16:57:11.933878661 +0000 UTC m=+230.449676248" Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.973087 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" podStartSLOduration=179.973072412 podStartE2EDuration="2m59.973072412s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:11.970138044 +0000 UTC m=+230.485935631" watchObservedRunningTime="2026-02-27 16:57:11.973072412 +0000 UTC m=+230.488869999" Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.974081 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:11 crc kubenswrapper[4708]: E0227 16:57:11.974205 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 16:57:12.474194071 +0000 UTC m=+230.989991658 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:11 crc kubenswrapper[4708]: I0227 16:57:11.976349 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:11 crc kubenswrapper[4708]: E0227 16:57:11.976739 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:12.476730168 +0000 UTC m=+230.992527755 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.014461 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" podStartSLOduration=179.01444148 podStartE2EDuration="2m59.01444148s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:12.01330123 +0000 UTC m=+230.529098817" watchObservedRunningTime="2026-02-27 16:57:12.01444148 +0000 UTC m=+230.530239067" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.078793 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:12 crc kubenswrapper[4708]: E0227 16:57:12.079320 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:12.579296115 +0000 UTC m=+231.095093702 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.079639 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:12 crc kubenswrapper[4708]: E0227 16:57:12.080043 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:12.580034935 +0000 UTC m=+231.095832522 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.093015 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-bpv6x" podStartSLOduration=5.092997186 podStartE2EDuration="5.092997186s" podCreationTimestamp="2026-02-27 16:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:12.090433258 +0000 UTC m=+230.606230845" watchObservedRunningTime="2026-02-27 16:57:12.092997186 +0000 UTC m=+230.608794773" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.180889 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:12 crc kubenswrapper[4708]: E0227 16:57:12.181428 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:12.681404871 +0000 UTC m=+231.197202458 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.184924 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn" event={"ID":"5af2f048-e8b4-449c-8c5d-e4c81f2437d4","Type":"ContainerStarted","Data":"2192a12133d5ad2d4422de087df6f374ea3001a1265d129ee35f12c5e4f18580"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.195747 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cl8l9" event={"ID":"bd7c826a-ca70-4d4f-90ca-96f0b72c173a","Type":"ContainerStarted","Data":"431d59f3f252df252cba4556b36207794b0b3e2e50a1549ead2623aec14b184e"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.210288 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-4nnfn" podStartSLOduration=179.21027166 podStartE2EDuration="2m59.21027166s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:12.205495334 +0000 UTC m=+230.721292921" watchObservedRunningTime="2026-02-27 16:57:12.21027166 +0000 UTC m=+230.726069247" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.222042 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" event={"ID":"d889f5d6-d274-4604-bb80-1529caf804d0","Type":"ContainerStarted","Data":"48aea928c226936567c8dd852bdce6de2e89478b3687297650ffe69fabeb90eb"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.245208 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-cl8l9" podStartSLOduration=180.245191559 podStartE2EDuration="3m0.245191559s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:12.243491804 +0000 UTC m=+230.759289391" watchObservedRunningTime="2026-02-27 16:57:12.245191559 +0000 UTC m=+230.760989146" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.283767 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:12 crc kubenswrapper[4708]: E0227 16:57:12.285072 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:12.785055777 +0000 UTC m=+231.300853364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.286341 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v" event={"ID":"f6f38275-7eca-41e7-81a7-0bc5233ba757","Type":"ContainerStarted","Data":"e5140eb823fc85b520ef2a878ac0670ac4a366426deafa38c341ce819e59e776"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.300869 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr" event={"ID":"e1d71bdc-a8da-44da-a448-8ee75981e31c","Type":"ContainerStarted","Data":"4f3c17a50132705ed82dd625cd1df519a006d4e8f34ea92e2c3efdf83cc3125d"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.306560 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" event={"ID":"13ba4c67-0444-463e-94f9-80da83971df5","Type":"ContainerStarted","Data":"24ac844ca343c48d9c2d4865c5319db808298779d8536e8721a77260f989e3f6"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.315380 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-bhsw7" event={"ID":"3bbf873e-72f0-4743-a2bc-4866dd8b8f86","Type":"ContainerStarted","Data":"3188ab6cb6b828f976d7c932272f2a6bbac425904709025048da1e278d313fa0"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.316489 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-bhsw7" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.317681 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nl25w" event={"ID":"e29ddaa7-6347-4254-bec7-d84e84cd57bd","Type":"ContainerStarted","Data":"5b8ace69f851f398e879f5d114c837a08c9447a7508ea97ba606edac2b902bce"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.325051 4708 patch_prober.go:28] interesting pod/downloads-7954f5f757-bhsw7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.325189 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bhsw7" podUID="3bbf873e-72f0-4743-a2bc-4866dd8b8f86" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.326889 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-95bz9" podStartSLOduration=180.326870477 podStartE2EDuration="3m0.326870477s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:12.325892221 +0000 
UTC m=+230.841689808" watchObservedRunningTime="2026-02-27 16:57:12.326870477 +0000 UTC m=+230.842668064" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.339192 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" event={"ID":"30774ea6-14da-4a74-9090-797c655dd601","Type":"ContainerStarted","Data":"47d33aecc557e0c4cd50c0662e84cff1726aa7d207dade155e0a28ef86be1440"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.375393 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qvsn8" event={"ID":"67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82","Type":"ContainerStarted","Data":"d49b0e2ae612ac7518a23b26bdc385eca3462f5ed9326e057589c4fc4a9b54cc"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.382876 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn4h4" event={"ID":"c050b374-23f2-4a98-af19-fee47a82a879","Type":"ContainerStarted","Data":"72e7b6e72ba0c34305d0e2ac256e15d2b6f72188fc1e4ac7b387df0451d8d498"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.384116 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:12 crc kubenswrapper[4708]: E0227 16:57:12.384362 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:12.884322378 +0000 UTC m=+231.400119965 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.384768 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:12 crc kubenswrapper[4708]: E0227 16:57:12.387434 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:12.887414889 +0000 UTC m=+231.403212476 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.408980 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5nggf" event={"ID":"399886aa-6188-4575-905d-ae9888853692","Type":"ContainerStarted","Data":"9917e9126e7c6ee406179b4d80609ce41a24feacb428a402b8dab62ca201ab4e"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.409016 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-5nggf" event={"ID":"399886aa-6188-4575-905d-ae9888853692","Type":"ContainerStarted","Data":"0c021431b2d6eb2d8d9cd4965a0ce344bc0ee5380077fed00c8e0702bde7157c"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.419350 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hdx" event={"ID":"d5929ffa-b478-440c-8efe-bad4b8f21e4e","Type":"ContainerStarted","Data":"45ca0d2253e2712a8ea5bd1417f9b367dfaeae95269232d75ffa3ab9dd372e48"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.419405 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hdx" event={"ID":"d5929ffa-b478-440c-8efe-bad4b8f21e4e","Type":"ContainerStarted","Data":"81c4307dcd2918078820346bf9b734f00e20e2f041eff363790681fdafa42248"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.421790 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd" event={"ID":"f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8","Type":"ContainerStarted","Data":"4ce39a6b76f4cac0efac3acc6604d6e65224443dba4cd35b9511bfb49e98cccb"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.421834 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd" event={"ID":"f4b4d3cb-4de9-4fb8-870b-ed3e9760c2e8","Type":"ContainerStarted","Data":"0f1edd177fdd82fb01de3ef2a18cfb706b96af605c28b00598f9576b11b5335a"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.438305 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" event={"ID":"3e664390-b33c-4aa5-972c-732c8ca37fda","Type":"ContainerStarted","Data":"ba9c8bc00a09553d45975e7b118d10ba35aab47940cd66b23c16e25902200b28"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.441900 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gmp65" event={"ID":"1d3404a6-1443-4eac-8087-3a89092bf1be","Type":"ContainerStarted","Data":"a14531c607a48e3101475a835859e9a7298039bb4eb0a150ff06ad4ac42fc57e"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.446663 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-kfxs6" event={"ID":"350b2f42-3a86-4113-9dd2-bfe644158993","Type":"ContainerStarted","Data":"ce9ac56ffafa4eda6064156cd06698425743c6d9f5e1da2165e650092519c7eb"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 
16:57:12.450668 4708 generic.go:334] "Generic (PLEG): container finished" podID="0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18" containerID="0ef8977abbc0bd55cdf635db9a743708337b826408b862939bbce5d033790210" exitCode=0 Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.450720 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" event={"ID":"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18","Type":"ContainerDied","Data":"0ef8977abbc0bd55cdf635db9a743708337b826408b862939bbce5d033790210"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.454061 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pjlg4" event={"ID":"f9d4819e-1f9b-43dc-9ef6-96fdb3f9c624","Type":"ContainerStarted","Data":"01cfed491b928386a3dba84fe7247ba551f66d6f31e85fdb32ca6c26d099e789"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.454124 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pjlg4" event={"ID":"f9d4819e-1f9b-43dc-9ef6-96fdb3f9c624","Type":"ContainerStarted","Data":"19bf6427cbbb1afc6cc5ce333b529493b1ac25e1594940bddfdd213f089e2e9a"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.457313 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" event={"ID":"5935db5e-10d8-40a9-bc7c-102a18d42401","Type":"ContainerStarted","Data":"2b05bc13a7866b79f10b8b568db2662d928a756c44e7409187fbc5262fb93cc8"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.457333 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" event={"ID":"5935db5e-10d8-40a9-bc7c-102a18d42401","Type":"ContainerStarted","Data":"02af1b5cdff780fb59580d0eb29aa36919b126a4f61fc4ead1e2b5bd16bfa333"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.457344 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" event={"ID":"5935db5e-10d8-40a9-bc7c-102a18d42401","Type":"ContainerStarted","Data":"3298ee6da2b5861f0af9533c186a5053520405a821def9ea47eee7e1fbe0da4b"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.466156 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" event={"ID":"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b","Type":"ContainerStarted","Data":"b4ee28e90d6d83dd2e89c75db48ad17da2e4e0faa7ca4c9f8bb80836b1930a17"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.466204 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" event={"ID":"bf7d946e-7a0a-4d26-b3bb-ba0eb988994b","Type":"ContainerStarted","Data":"79494e97f50bc3f957ee772b4c0b107d8bf77a5eec5b79c997fe09c9962fa841"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.475465 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" event={"ID":"3de1e003-2dee-4d76-86cd-cd60680535bd","Type":"ContainerStarted","Data":"ff32f41d589b3510c77a1e0b24957c36d285c8497a8287c361be67df1b90dc23"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.476517 4708 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-55dsj container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get 
\"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" start-of-body= Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.476545 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.17:6443/healthz\": dial tcp 10.217.0.17:6443: connect: connection refused" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.478539 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6" event={"ID":"0c8da5c9-ed4d-480d-99fe-f43c05ea9cd8","Type":"ContainerStarted","Data":"6b123ba0c6dcfe5c7fdc247a9bc231d4423d18c95bc46109127fc0342394f72e"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.489553 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:12 crc kubenswrapper[4708]: E0227 16:57:12.491090 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:12.991068225 +0000 UTC m=+231.506865812 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.493726 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" event={"ID":"ee155f68-76dd-411e-8617-05e452690cdf","Type":"ContainerStarted","Data":"28ebdadf4bbe54cc37061b05e58db1e0ae23faf82f6292ede0d67374c803b87c"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.494299 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.514649 4708 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-qpj27 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" start-of-body= Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.514735 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" podUID="ee155f68-76dd-411e-8617-05e452690cdf" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.32:8443/healthz\": dial tcp 10.217.0.32:8443: connect: connection refused" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.516573 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hdx" podStartSLOduration=180.516553625 podStartE2EDuration="3m0.516553625s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:12.481052672 +0000 UTC m=+230.996850259" watchObservedRunningTime="2026-02-27 16:57:12.516553625 +0000 UTC m=+231.032351212" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.517013 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-jn4h4" podStartSLOduration=179.517009547 podStartE2EDuration="2m59.517009547s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:12.515319573 +0000 UTC m=+231.031117160" watchObservedRunningTime="2026-02-27 16:57:12.517009547 +0000 UTC m=+231.032807134" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.549004 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-5nggf" podStartSLOduration=6.548970528 podStartE2EDuration="6.548970528s" podCreationTimestamp="2026-02-27 16:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:12.547312224 +0000 UTC m=+231.063109811" watchObservedRunningTime="2026-02-27 16:57:12.548970528 +0000 UTC m=+231.064768115" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.560534 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" event={"ID":"b5731e3c-f903-4516-8c08-43113e79a4ba","Type":"ContainerStarted","Data":"2f989d422f778c9f0d006079a9de989a2d68301aa9df40710cd704ca969f52c1"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.561046 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.591575 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:12 crc kubenswrapper[4708]: E0227 16:57:12.592384 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.09237264 +0000 UTC m=+231.608170217 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.609506 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-pwk5v" podStartSLOduration=180.60948687 podStartE2EDuration="3m0.60948687s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:12.597439153 +0000 UTC m=+231.113236740" watchObservedRunningTime="2026-02-27 16:57:12.60948687 +0000 UTC m=+231.125284457" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.627294 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl" event={"ID":"a334b9f5-9e47-48a5-97e2-481df00ce760","Type":"ContainerStarted","Data":"6d4139deb5a97f599d40570883e5b6b0c478fe6cf415c19063cc1d8fcb2ac1a0"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.628125 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lmkxt" podStartSLOduration=179.6281138 podStartE2EDuration="2m59.6281138s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:12.627518434 +0000 UTC m=+231.143316021" watchObservedRunningTime="2026-02-27 16:57:12.6281138 +0000 UTC m=+231.143911387" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.645106 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" event={"ID":"80ea1e3c-71c9-4fa8-bd21-15e217d09023","Type":"ContainerStarted","Data":"7651e18a53bd9855c728d8a40cb6fc746c4c9c98e856811a37c547e5f0207fb2"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.645988 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.647478 4708 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-rmvrl container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body= Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.647516 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" podUID="80ea1e3c-71c9-4fa8-bd21-15e217d09023" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.650648 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" 
event={"ID":"26d12a6e-d830-4357-b372-9163d663448f","Type":"ContainerStarted","Data":"0b7e31cc1c88a7e3194e2f2ce5da313abb3ab3242420bf0654d55cadd67cd89f"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.667814 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-bkpsd" podStartSLOduration=179.667797563 podStartE2EDuration="2m59.667797563s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:12.663449039 +0000 UTC m=+231.179246626" watchObservedRunningTime="2026-02-27 16:57:12.667797563 +0000 UTC m=+231.183595150" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.690372 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" event={"ID":"b710111d-81c5-463d-b2ea-f7f3f5e27b90","Type":"ContainerStarted","Data":"75664d75da3e61aaeaa791208ad9750ec2947131e51b5212a936bd40918ad919"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.690987 4708 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-km9ss container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" start-of-body= Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.691029 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" podUID="b710111d-81c5-463d-b2ea-f7f3f5e27b90" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.30:8443/healthz\": dial tcp 10.217.0.30:8443: connect: connection refused" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.697388 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" event={"ID":"14aac296-ac45-4e74-91c1-069313c31337","Type":"ContainerStarted","Data":"3137568ca6deba3c3f0518a8aaf7ad3d95a461bacb1f0d6a7939bc8ccc7919ce"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.698094 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.720607 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:12 crc kubenswrapper[4708]: E0227 16:57:12.722138 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.222122232 +0000 UTC m=+231.737919819 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.730127 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" event={"ID":"84260b20-4df9-4dea-9524-bd9c18ef7074","Type":"ContainerStarted","Data":"350a3a05f06d231e3c6fca76f2892edf006c0cc2c07baf965c6316bddd254159"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.730176 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" event={"ID":"84260b20-4df9-4dea-9524-bd9c18ef7074","Type":"ContainerStarted","Data":"6e06c4820271a6847317bff3eb94cbb8786261f0368611b49f944cec1b28746b"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.732996 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.760023 4708 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-nhz26 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body= Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.760165 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" podUID="14aac296-ac45-4e74-91c1-069313c31337" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.772369 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" event={"ID":"1af108ab-bba9-4de6-bcbc-601fcba9e197","Type":"ContainerStarted","Data":"94428cff272b6f65df0112d47d290f0f891a52360a2c229eed3a963f3bd9a19e"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.775709 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" event={"ID":"aa17085d-69af-43ec-8abe-51906d32cd5f","Type":"ContainerStarted","Data":"da4940811308daa23949d376fdc06900acb70857cd9c4941ddf552a2fa3ce7b5"} Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.778110 4708 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lzlm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.778186 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" podUID="84260b20-4df9-4dea-9524-bd9c18ef7074" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 27 16:57:12 crc 
kubenswrapper[4708]: I0227 16:57:12.796421 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.826423 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:12 crc kubenswrapper[4708]: E0227 16:57:12.833546 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.333528792 +0000 UTC m=+231.849326379 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.836099 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:57:12 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld Feb 27 16:57:12 crc kubenswrapper[4708]: [+]process-running ok Feb 27 16:57:12 crc kubenswrapper[4708]: healthz check failed Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.836149 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.850048 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-bhsw7" podStartSLOduration=180.850026536 podStartE2EDuration="3m0.850026536s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:12.729613739 +0000 UTC m=+231.245411326" watchObservedRunningTime="2026-02-27 16:57:12.850026536 +0000 UTC m=+231.365824123" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.932242 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:12 crc kubenswrapper[4708]: E0227 16:57:12.932507 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" 
failed. No retries permitted until 2026-02-27 16:57:13.432492265 +0000 UTC m=+231.948289852 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.953520 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tthbr" podStartSLOduration=179.953497897 podStartE2EDuration="2m59.953497897s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:12.852288545 +0000 UTC m=+231.368086132" watchObservedRunningTime="2026-02-27 16:57:12.953497897 +0000 UTC m=+231.469295484" Feb 27 16:57:12 crc kubenswrapper[4708]: I0227 16:57:12.974559 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-pjlg4" podStartSLOduration=179.974543661 podStartE2EDuration="2m59.974543661s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:12.973982616 +0000 UTC m=+231.489780193" watchObservedRunningTime="2026-02-27 16:57:12.974543661 +0000 UTC m=+231.490341248" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.033310 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.033362 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.033394 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.033421 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 
16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.033745 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:57:13 crc kubenswrapper[4708]: E0227 16:57:13.033920 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.533905182 +0000 UTC m=+232.049702769 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.043785 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" podStartSLOduration=180.043767331 podStartE2EDuration="3m0.043767331s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.043355121 +0000 UTC m=+231.559152708" watchObservedRunningTime="2026-02-27 16:57:13.043767331 +0000 UTC m=+231.559564918" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.044105 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" podStartSLOduration=180.04410098 podStartE2EDuration="3m0.04410098s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.015039766 +0000 UTC m=+231.530837353" watchObservedRunningTime="2026-02-27 16:57:13.04410098 +0000 UTC m=+231.559898557" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.046643 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.048737 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.067328 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.073186 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-s45vs" podStartSLOduration=180.073174285 podStartE2EDuration="3m0.073174285s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.071511651 +0000 UTC m=+231.587309238" watchObservedRunningTime="2026-02-27 16:57:13.073174285 +0000 UTC m=+231.588971872" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.075259 4708 ???:1] "http: TLS handshake error from 192.168.126.11:33738: no serving certificate available for the kubelet" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.097641 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" podStartSLOduration=181.097622868 podStartE2EDuration="3m1.097622868s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.096253602 +0000 UTC m=+231.612051189" watchObservedRunningTime="2026-02-27 16:57:13.097622868 +0000 UTC m=+231.613420455" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.134357 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:13 crc kubenswrapper[4708]: E0227 16:57:13.134604 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.634568259 +0000 UTC m=+232.150365846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.134668 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:13 crc kubenswrapper[4708]: E0227 16:57:13.135012 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.634999581 +0000 UTC m=+232.150797168 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.146832 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-gnspz" podStartSLOduration=181.146814972 podStartE2EDuration="3m1.146814972s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.145443945 +0000 UTC m=+231.661241532" watchObservedRunningTime="2026-02-27 16:57:13.146814972 +0000 UTC m=+231.662612559" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.147526 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" podStartSLOduration=180.14752263 podStartE2EDuration="3m0.14752263s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.122122172 +0000 UTC m=+231.637919759" watchObservedRunningTime="2026-02-27 16:57:13.14752263 +0000 UTC m=+231.663320217" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.149217 4708 ???:1] "http: TLS handshake error from 192.168.126.11:33754: no serving certificate available for the kubelet" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.210130 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hwfzq" podStartSLOduration=181.210115066 podStartE2EDuration="3m1.210115066s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.180666152 +0000 UTC m=+231.696463739" watchObservedRunningTime="2026-02-27 16:57:13.210115066 +0000 UTC m=+231.725912643" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.235403 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.235633 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:57:13 crc kubenswrapper[4708]: E0227 16:57:13.236682 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b 
nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.736642444 +0000 UTC m=+232.252440031 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.245048 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.248099 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.252236 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-r5zs6" podStartSLOduration=180.252221784 podStartE2EDuration="3m0.252221784s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.250769766 +0000 UTC m=+231.766567353" watchObservedRunningTime="2026-02-27 16:57:13.252221784 +0000 UTC m=+231.768019361" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.257268 4708 ???:1] "http: TLS handshake error from 192.168.126.11:33768: no serving certificate available for the kubelet" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.326221 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" podStartSLOduration=180.32620485 podStartE2EDuration="3m0.32620485s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.283905947 +0000 UTC m=+231.799703534" watchObservedRunningTime="2026-02-27 16:57:13.32620485 +0000 UTC m=+231.842002427" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.326622 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-4vsrd" podStartSLOduration=180.32661858 podStartE2EDuration="3m0.32661858s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.325205363 +0000 UTC m=+231.841002950" watchObservedRunningTime="2026-02-27 16:57:13.32661858 +0000 UTC m=+231.842416167" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.338562 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs\") pod \"network-metrics-daemon-4t52p\" (UID: 
\"79b58c0b-8d12-4391-999c-9689f9488f46\") " pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.338605 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:13 crc kubenswrapper[4708]: E0227 16:57:13.338921 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.838910264 +0000 UTC m=+232.354707851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.341546 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79b58c0b-8d12-4391-999c-9689f9488f46-metrics-certs\") pod \"network-metrics-daemon-4t52p\" (UID: \"79b58c0b-8d12-4391-999c-9689f9488f46\") " pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.355877 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.362601 4708 ???:1] "http: TLS handshake error from 192.168.126.11:33778: no serving certificate available for the kubelet" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.369322 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-c2nfw" podStartSLOduration=180.369304833 podStartE2EDuration="3m0.369304833s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.367501036 +0000 UTC m=+231.883298623" watchObservedRunningTime="2026-02-27 16:57:13.369304833 +0000 UTC m=+231.885102420" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.418121 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl" podStartSLOduration=180.418105957 podStartE2EDuration="3m0.418105957s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.417154602 +0000 UTC m=+231.932952189" watchObservedRunningTime="2026-02-27 16:57:13.418105957 +0000 UTC m=+231.933903544" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.439645 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:13 crc kubenswrapper[4708]: E0227 16:57:13.439985 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:13.939969202 +0000 UTC m=+232.455766789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.475758 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" podStartSLOduration=180.475743632 podStartE2EDuration="3m0.475743632s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.474275674 +0000 UTC m=+231.990073261" watchObservedRunningTime="2026-02-27 16:57:13.475743632 +0000 UTC m=+231.991541219" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.481349 4708 ???:1] "http: TLS handshake error from 192.168.126.11:33792: no serving certificate available for the kubelet" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.541323 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:13 crc kubenswrapper[4708]: E0227 16:57:13.543299 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:14.043284559 +0000 UTC m=+232.559082146 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.556465 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-4t52p" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.576417 4708 ???:1] "http: TLS handshake error from 192.168.126.11:33804: no serving certificate available for the kubelet" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.648799 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:13 crc kubenswrapper[4708]: E0227 16:57:13.650106 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 16:57:14.150076887 +0000 UTC m=+232.665874474 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.651389 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:13 crc kubenswrapper[4708]: E0227 16:57:13.651734 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:14.151721551 +0000 UTC m=+232.667519138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.684016 4708 ???:1] "http: TLS handshake error from 192.168.126.11:33812: no serving certificate available for the kubelet" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.713905 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-kfxs6" podStartSLOduration=180.713886606 podStartE2EDuration="3m0.713886606s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.508246337 +0000 UTC m=+232.024043924" watchObservedRunningTime="2026-02-27 16:57:13.713886606 +0000 UTC m=+232.229684193" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.763676 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:13 crc kubenswrapper[4708]: E0227 16:57:13.764109 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:14.264094786 +0000 UTC m=+232.779892363 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.796918 4708 patch_prober.go:28] interesting pod/console-operator-58897d9998-m8pjn container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.18:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.796973 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-m8pjn" podUID="91805ba9-a3ff-4470-9302-cc2de796c19a" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.18:8443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.824360 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:57:13 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld Feb 27 16:57:13 crc kubenswrapper[4708]: [+]process-running ok Feb 27 16:57:13 crc kubenswrapper[4708]: healthz check failed Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.824402 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.837664 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qvsn8" event={"ID":"67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82","Type":"ContainerStarted","Data":"29dd33aafaa835a8baa018b72ba1a2316e7a643c07d9254eb2906034c4cbf386"} Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.837702 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qvsn8" event={"ID":"67c2f1a9-a175-4ff1-8f8a-17f7ac6eff82","Type":"ContainerStarted","Data":"7d26fdfdf9012adf20fb6a7325808ea9575046539e48c635a299c7e6d3c9797a"} Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.838021 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qvsn8" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.851748 4708 ???:1] "http: TLS handshake error from 192.168.126.11:33826: no serving certificate available for the kubelet" Feb 27 16:57:13 crc kubenswrapper[4708]: W0227 16:57:13.851906 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b6479f0_333b_4a96_9adf_2099afdc2447.slice/crio-069c9d2e25f9e582e8cc98978f41d17f6f0669babc1e0ca983c8aa3f4a5110bb WatchSource:0}: Error finding container 069c9d2e25f9e582e8cc98978f41d17f6f0669babc1e0ca983c8aa3f4a5110bb: Status 404 returned error can't find the 
container with id 069c9d2e25f9e582e8cc98978f41d17f6f0669babc1e0ca983c8aa3f4a5110bb Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.864740 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:13 crc kubenswrapper[4708]: E0227 16:57:13.865049 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:14.365037871 +0000 UTC m=+232.880835458 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.874790 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-77hdx" event={"ID":"d5929ffa-b478-440c-8efe-bad4b8f21e4e","Type":"ContainerStarted","Data":"e5982b8c6cf3b2f5eac1fd11b2d3b2079a9b757c19778d7288e9874e01a4c358"} Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.888090 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-nl25w" event={"ID":"e29ddaa7-6347-4254-bec7-d84e84cd57bd","Type":"ContainerStarted","Data":"135ab3dc7af8aec5ce7943cab6ea51ad66553ff7f3f84e84036eb0659953b50f"} Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.903990 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-qvsn8" podStartSLOduration=6.903974455 podStartE2EDuration="6.903974455s" podCreationTimestamp="2026-02-27 16:57:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.89540004 +0000 UTC m=+232.411197627" watchObservedRunningTime="2026-02-27 16:57:13.903974455 +0000 UTC m=+232.419772032" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.925409 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" event={"ID":"0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18","Type":"ContainerStarted","Data":"dbcdeb18689630b381a8f37979a80926501e5ff7e2d0ff3a88172661561c0ee9"} Feb 27 16:57:13 crc kubenswrapper[4708]: W0227 16:57:13.963165 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9d751cbb_f2e2_430d_9754_c882a5e924a5.slice/crio-9f36ec5d35cf4065fa4a0f1c9fdb052d3fb31567b606b84a6afb4d261bfdac1e WatchSource:0}: Error finding container 9f36ec5d35cf4065fa4a0f1c9fdb052d3fb31567b606b84a6afb4d261bfdac1e: Status 404 returned error can't find the container with id 9f36ec5d35cf4065fa4a0f1c9fdb052d3fb31567b606b84a6afb4d261bfdac1e Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.963581 4708 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl" event={"ID":"a334b9f5-9e47-48a5-97e2-481df00ce760","Type":"ContainerStarted","Data":"d65096048dfae4e5d9177c856af59fa5fe44ccab88b58fbe959804a415741f9c"} Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.964203 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl" Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.965704 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:13 crc kubenswrapper[4708]: E0227 16:57:13.966802 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:14.466787787 +0000 UTC m=+232.982585374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.994110 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gmp65" event={"ID":"1d3404a6-1443-4eac-8087-3a89092bf1be","Type":"ContainerStarted","Data":"d94154eaef1e85d87e1bbf461cdb5cbc8f74dfbaa733e48695ce4f06a1182076"} Feb 27 16:57:13 crc kubenswrapper[4708]: I0227 16:57:13.994153 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gmp65" event={"ID":"1d3404a6-1443-4eac-8087-3a89092bf1be","Type":"ContainerStarted","Data":"f5f1fc4fc9b31d1ef468557d77d712146a6a80f75c7f6e1a9b12ae4d521040c8"} Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.060584 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-nl25w" podStartSLOduration=182.060567203 podStartE2EDuration="3m2.060567203s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:13.972955899 +0000 UTC m=+232.488753486" watchObservedRunningTime="2026-02-27 16:57:14.060567203 +0000 UTC m=+232.576364790" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.066693 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:14 crc kubenswrapper[4708]: E0227 16:57:14.068136 4708 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:14.568124702 +0000 UTC m=+233.083922289 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.081313 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" podStartSLOduration=181.081295469 podStartE2EDuration="3m1.081295469s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:14.078528586 +0000 UTC m=+232.594326173" watchObservedRunningTime="2026-02-27 16:57:14.081295469 +0000 UTC m=+232.597093056" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.087718 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-qtj4l" event={"ID":"1af108ab-bba9-4de6-bcbc-601fcba9e197","Type":"ContainerStarted","Data":"adae86f377caa235c0b5380ba70622f13bd198ff88b39858eacc8edd1a18365c"} Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.130507 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-gmp65" podStartSLOduration=181.130488532 podStartE2EDuration="3m1.130488532s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:14.130092612 +0000 UTC m=+232.645890199" watchObservedRunningTime="2026-02-27 16:57:14.130488532 +0000 UTC m=+232.646286119" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.150322 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" event={"ID":"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65","Type":"ContainerStarted","Data":"51cea19e27acc16ba6facb6587c4b69d2d774b02892e3db80923b3da67ae4f6a"} Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.150365 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" event={"ID":"96b1d3f2-9f87-4beb-9e2c-e6006fa90e65","Type":"ContainerStarted","Data":"c50275e832344d1d46da7db7e7e37283836ab1b0be6a5c9b7995fa15dc5f013b"} Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.162001 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" event={"ID":"13ba4c67-0444-463e-94f9-80da83971df5","Type":"ContainerStarted","Data":"67f8f918c525793358dcfbd5c55aa70c77ed264c4f7959582203e8e26954a8f4"} Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.163670 4708 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lzlm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" start-of-body= Feb 
27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.163702 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" podUID="84260b20-4df9-4dea-9524-bd9c18ef7074" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": dial tcp 10.217.0.33:8080: connect: connection refused" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.166701 4708 patch_prober.go:28] interesting pod/downloads-7954f5f757-bhsw7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.166745 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bhsw7" podUID="3bbf873e-72f0-4743-a2bc-4866dd8b8f86" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.167270 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:14 crc kubenswrapper[4708]: E0227 16:57:14.167564 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:14.667549497 +0000 UTC m=+233.183347084 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.168008 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:14 crc kubenswrapper[4708]: E0227 16:57:14.168381 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:14.668370109 +0000 UTC m=+233.184167786 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.180384 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-nhz26" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.181664 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-qpj27" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.181739 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.192131 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.272765 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.273300 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:14 crc kubenswrapper[4708]: E0227 16:57:14.275254 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:14.775240369 +0000 UTC m=+233.291037946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.287216 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.301102 4708 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-nw84d container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.301159 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d" podUID="0fdf33f6-f90e-4c4f-9bb7-d25a16b54e18" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.11:8443/livez\": dial tcp 10.217.0.11:8443: connect: connection refused" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.314396 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-d7z7j" podStartSLOduration=182.314367709 podStartE2EDuration="3m2.314367709s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:14.251046243 +0000 UTC m=+232.766843830" watchObservedRunningTime="2026-02-27 16:57:14.314367709 +0000 UTC m=+232.830165296" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.351116 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-4t52p"] Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.375961 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:14 crc kubenswrapper[4708]: E0227 16:57:14.376305 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:14.876293197 +0000 UTC m=+233.392090784 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.400036 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-m8pjn" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.478416 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:14 crc kubenswrapper[4708]: E0227 16:57:14.478827 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:14.978811914 +0000 UTC m=+233.494609491 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.580078 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:14 crc kubenswrapper[4708]: E0227 16:57:14.580402 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:15.080389105 +0000 UTC m=+233.596186692 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.581677 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-qg7fq" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.610720 4708 ???:1] "http: TLS handshake error from 192.168.126.11:33834: no serving certificate available for the kubelet" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.681676 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:14 crc kubenswrapper[4708]: E0227 16:57:14.682025 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:15.182008128 +0000 UTC m=+233.697805715 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.705901 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-rmvrl" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.783147 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:14 crc kubenswrapper[4708]: E0227 16:57:14.783759 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:15.283746763 +0000 UTC m=+233.799544340 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.824440 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:57:14 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld Feb 27 16:57:14 crc kubenswrapper[4708]: [+]process-running ok Feb 27 16:57:14 crc kubenswrapper[4708]: healthz check failed Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.824510 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.884858 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:14 crc kubenswrapper[4708]: E0227 16:57:14.885241 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:15.385226922 +0000 UTC m=+233.901024509 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.912982 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7rtdw"] Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.913888 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.918903 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.944913 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7rtdw"] Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.986733 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b733486-f273-4bd5-afa3-d35d3d1feafc-catalog-content\") pod \"certified-operators-7rtdw\" (UID: \"9b733486-f273-4bd5-afa3-d35d3d1feafc\") " pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.986792 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-586qv\" (UniqueName: \"kubernetes.io/projected/9b733486-f273-4bd5-afa3-d35d3d1feafc-kube-api-access-586qv\") pod \"certified-operators-7rtdw\" (UID: \"9b733486-f273-4bd5-afa3-d35d3d1feafc\") " pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.986858 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:14 crc kubenswrapper[4708]: I0227 16:57:14.986883 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b733486-f273-4bd5-afa3-d35d3d1feafc-utilities\") pod \"certified-operators-7rtdw\" (UID: \"9b733486-f273-4bd5-afa3-d35d3d1feafc\") " pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 16:57:14 crc kubenswrapper[4708]: E0227 16:57:14.987155 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:15.487145223 +0000 UTC m=+234.002942810 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.087431 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:15 crc kubenswrapper[4708]: E0227 16:57:15.087556 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:15.587530722 +0000 UTC m=+234.103328309 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.087685 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b733486-f273-4bd5-afa3-d35d3d1feafc-catalog-content\") pod \"certified-operators-7rtdw\" (UID: \"9b733486-f273-4bd5-afa3-d35d3d1feafc\") " pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.087727 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-586qv\" (UniqueName: \"kubernetes.io/projected/9b733486-f273-4bd5-afa3-d35d3d1feafc-kube-api-access-586qv\") pod \"certified-operators-7rtdw\" (UID: \"9b733486-f273-4bd5-afa3-d35d3d1feafc\") " pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.087775 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.087798 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b733486-f273-4bd5-afa3-d35d3d1feafc-utilities\") pod \"certified-operators-7rtdw\" (UID: \"9b733486-f273-4bd5-afa3-d35d3d1feafc\") " pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 16:57:15 crc kubenswrapper[4708]: E0227 16:57:15.088131 4708 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:15.588118138 +0000 UTC m=+234.103915725 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.088140 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b733486-f273-4bd5-afa3-d35d3d1feafc-catalog-content\") pod \"certified-operators-7rtdw\" (UID: \"9b733486-f273-4bd5-afa3-d35d3d1feafc\") " pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.088522 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b733486-f273-4bd5-afa3-d35d3d1feafc-utilities\") pod \"certified-operators-7rtdw\" (UID: \"9b733486-f273-4bd5-afa3-d35d3d1feafc\") " pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.123517 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-586qv\" (UniqueName: \"kubernetes.io/projected/9b733486-f273-4bd5-afa3-d35d3d1feafc-kube-api-access-586qv\") pod \"certified-operators-7rtdw\" (UID: \"9b733486-f273-4bd5-afa3-d35d3d1feafc\") " pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.187506 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4t52p" event={"ID":"79b58c0b-8d12-4391-999c-9689f9488f46","Type":"ContainerStarted","Data":"9f97d78205a486627b11069b6e5b84dbeaf6f70e66cfeb66bfd23bdfa88536c8"} Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.187555 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4t52p" event={"ID":"79b58c0b-8d12-4391-999c-9689f9488f46","Type":"ContainerStarted","Data":"3b58f2aff86765ab6f9755396c33241350a20761d75261e594f5583b960cdeb1"} Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.189087 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:15 crc kubenswrapper[4708]: E0227 16:57:15.189415 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:15.689401651 +0000 UTC m=+234.205199238 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.196693 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7fb49188c23e5778dc5149a48f5ac7a93367460155c9f854db1d0ee40e0d9925"} Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.196733 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"3fe4fe26e310365a26d8d253981d9102d8202f064c558b8f98a46e0a641a6ee8"} Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.209640 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9acec42891a4f807765ac4b9b5bcc47fa12de25425197cbec205e5d2e6995777"} Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.209699 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"9f36ec5d35cf4065fa4a0f1c9fdb052d3fb31567b606b84a6afb4d261bfdac1e"} Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.217812 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"d778279e491d958656744bcc67fef6a3b7be6bee344936ef558d4f07afe5e806"} Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.218041 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"069c9d2e25f9e582e8cc98978f41d17f6f0669babc1e0ca983c8aa3f4a5110bb"} Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.218825 4708 patch_prober.go:28] interesting pod/downloads-7954f5f757-bhsw7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body= Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.218893 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bhsw7" podUID="3bbf873e-72f0-4743-a2bc-4866dd8b8f86" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.246556 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hw5dq"] Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.246599 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.247706 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.290641 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:15 crc kubenswrapper[4708]: E0227 16:57:15.291430 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:15.791413894 +0000 UTC m=+234.307211481 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.333622 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hw5dq"] Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.394392 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.394872 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70493bd3-d5c2-49e2-bd00-ac98325a2187-catalog-content\") pod \"certified-operators-hw5dq\" (UID: \"70493bd3-d5c2-49e2-bd00-ac98325a2187\") " pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.394934 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70493bd3-d5c2-49e2-bd00-ac98325a2187-utilities\") pod \"certified-operators-hw5dq\" (UID: \"70493bd3-d5c2-49e2-bd00-ac98325a2187\") " pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.395074 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44qz8\" (UniqueName: \"kubernetes.io/projected/70493bd3-d5c2-49e2-bd00-ac98325a2187-kube-api-access-44qz8\") pod \"certified-operators-hw5dq\" (UID: \"70493bd3-d5c2-49e2-bd00-ac98325a2187\") " pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:57:15 crc kubenswrapper[4708]: E0227 16:57:15.397017 4708 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:15.896995211 +0000 UTC m=+234.412792798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.429612 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zvqlm"] Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.430537 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zvqlm" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.457955 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.497075 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zvqlm"] Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.500020 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.500112 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70493bd3-d5c2-49e2-bd00-ac98325a2187-catalog-content\") pod \"certified-operators-hw5dq\" (UID: \"70493bd3-d5c2-49e2-bd00-ac98325a2187\") " pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.500141 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70493bd3-d5c2-49e2-bd00-ac98325a2187-utilities\") pod \"certified-operators-hw5dq\" (UID: \"70493bd3-d5c2-49e2-bd00-ac98325a2187\") " pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.500186 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44qz8\" (UniqueName: \"kubernetes.io/projected/70493bd3-d5c2-49e2-bd00-ac98325a2187-kube-api-access-44qz8\") pod \"certified-operators-hw5dq\" (UID: \"70493bd3-d5c2-49e2-bd00-ac98325a2187\") " pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:57:15 crc kubenswrapper[4708]: E0227 16:57:15.501012 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:16.000992386 +0000 UTC m=+234.516789973 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.501670 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70493bd3-d5c2-49e2-bd00-ac98325a2187-catalog-content\") pod \"certified-operators-hw5dq\" (UID: \"70493bd3-d5c2-49e2-bd00-ac98325a2187\") " pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.504907 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70493bd3-d5c2-49e2-bd00-ac98325a2187-utilities\") pod \"certified-operators-hw5dq\" (UID: \"70493bd3-d5c2-49e2-bd00-ac98325a2187\") " pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.579025 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44qz8\" (UniqueName: \"kubernetes.io/projected/70493bd3-d5c2-49e2-bd00-ac98325a2187-kube-api-access-44qz8\") pod \"certified-operators-hw5dq\" (UID: \"70493bd3-d5c2-49e2-bd00-ac98325a2187\") " pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.586763 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.603138 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.603383 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4g9c\" (UniqueName: \"kubernetes.io/projected/5710135c-fd59-4ff6-b74a-ad7ab8730aff-kube-api-access-q4g9c\") pod \"community-operators-zvqlm\" (UID: \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\") " pod="openshift-marketplace/community-operators-zvqlm" Feb 27 16:57:15 crc kubenswrapper[4708]: E0227 16:57:15.603433 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:16.10340738 +0000 UTC m=+234.619204967 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.603470 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5710135c-fd59-4ff6-b74a-ad7ab8730aff-catalog-content\") pod \"community-operators-zvqlm\" (UID: \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\") " pod="openshift-marketplace/community-operators-zvqlm" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.603501 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5710135c-fd59-4ff6-b74a-ad7ab8730aff-utilities\") pod \"community-operators-zvqlm\" (UID: \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\") " pod="openshift-marketplace/community-operators-zvqlm" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.603577 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:15 crc kubenswrapper[4708]: E0227 16:57:15.603962 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:16.103954644 +0000 UTC m=+234.619752231 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.618893 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ggb2w"] Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.620462 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.649380 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ggb2w"] Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.704378 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.704521 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gncnx\" (UniqueName: \"kubernetes.io/projected/b2d410d4-9144-42b4-96c9-345732131a7e-kube-api-access-gncnx\") pod \"community-operators-ggb2w\" (UID: \"b2d410d4-9144-42b4-96c9-345732131a7e\") " pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.704555 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5710135c-fd59-4ff6-b74a-ad7ab8730aff-catalog-content\") pod \"community-operators-zvqlm\" (UID: \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\") " pod="openshift-marketplace/community-operators-zvqlm" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.704576 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5710135c-fd59-4ff6-b74a-ad7ab8730aff-utilities\") pod \"community-operators-zvqlm\" (UID: \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\") " pod="openshift-marketplace/community-operators-zvqlm" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.704645 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d410d4-9144-42b4-96c9-345732131a7e-utilities\") pod \"community-operators-ggb2w\" (UID: \"b2d410d4-9144-42b4-96c9-345732131a7e\") " pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.704664 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4g9c\" (UniqueName: \"kubernetes.io/projected/5710135c-fd59-4ff6-b74a-ad7ab8730aff-kube-api-access-q4g9c\") pod \"community-operators-zvqlm\" (UID: \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\") " pod="openshift-marketplace/community-operators-zvqlm" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.704680 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d410d4-9144-42b4-96c9-345732131a7e-catalog-content\") pod \"community-operators-ggb2w\" (UID: \"b2d410d4-9144-42b4-96c9-345732131a7e\") " pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:57:15 crc kubenswrapper[4708]: E0227 16:57:15.704771 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:16.204756165 +0000 UTC m=+234.720553752 (durationBeforeRetry 500ms). 
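
The 500ms figure is a fixed backoff: each failed operation records a deadline of failure time plus durationBeforeRetry, and the reconciler skips the operation until that deadline passes. A worked check in Go against the E0227 16:57:15.704771 record just above, assuming only that the deadline comes from a clock read taken a few microseconds before the log line's own timestamp:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Failure timestamp from the E0227 16:57:15.704771 record above.
        failedAt := time.Date(2026, 2, 27, 16, 57, 15, 704771000, time.UTC)
        durationBeforeRetry := 500 * time.Millisecond
        fmt.Println(failedAt.Add(durationBeforeRetry))
        // Prints 2026-02-27 16:57:16.204771 +0000 UTC; the logged deadline,
        // 16:57:16.204756165, is ~15µs earlier because it was computed from a
        // clock reading taken just before the log line was stamped.
    }
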
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.705421 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5710135c-fd59-4ff6-b74a-ad7ab8730aff-catalog-content\") pod \"community-operators-zvqlm\" (UID: \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\") " pod="openshift-marketplace/community-operators-zvqlm" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.705618 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5710135c-fd59-4ff6-b74a-ad7ab8730aff-utilities\") pod \"community-operators-zvqlm\" (UID: \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\") " pod="openshift-marketplace/community-operators-zvqlm" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.737717 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4g9c\" (UniqueName: \"kubernetes.io/projected/5710135c-fd59-4ff6-b74a-ad7ab8730aff-kube-api-access-q4g9c\") pod \"community-operators-zvqlm\" (UID: \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\") " pod="openshift-marketplace/community-operators-zvqlm" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.804036 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zvqlm" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.808674 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d410d4-9144-42b4-96c9-345732131a7e-utilities\") pod \"community-operators-ggb2w\" (UID: \"b2d410d4-9144-42b4-96c9-345732131a7e\") " pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.808709 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d410d4-9144-42b4-96c9-345732131a7e-catalog-content\") pod \"community-operators-ggb2w\" (UID: \"b2d410d4-9144-42b4-96c9-345732131a7e\") " pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.808738 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gncnx\" (UniqueName: \"kubernetes.io/projected/b2d410d4-9144-42b4-96c9-345732131a7e-kube-api-access-gncnx\") pod \"community-operators-ggb2w\" (UID: \"b2d410d4-9144-42b4-96c9-345732131a7e\") " pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.808780 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.809232 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d410d4-9144-42b4-96c9-345732131a7e-utilities\") pod \"community-operators-ggb2w\" (UID: \"b2d410d4-9144-42b4-96c9-345732131a7e\") " pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.809442 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d410d4-9144-42b4-96c9-345732131a7e-catalog-content\") pod \"community-operators-ggb2w\" (UID: \"b2d410d4-9144-42b4-96c9-345732131a7e\") " pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:57:15 crc kubenswrapper[4708]: E0227 16:57:15.809897 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:16.3098863 +0000 UTC m=+234.825683887 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.831805 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:57:15 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld Feb 27 16:57:15 crc kubenswrapper[4708]: [+]process-running ok Feb 27 16:57:15 crc kubenswrapper[4708]: healthz check failed Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.831869 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.842666 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gncnx\" (UniqueName: \"kubernetes.io/projected/b2d410d4-9144-42b4-96c9-345732131a7e-kube-api-access-gncnx\") pod \"community-operators-ggb2w\" (UID: \"b2d410d4-9144-42b4-96c9-345732131a7e\") " pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.913044 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:15 crc kubenswrapper[4708]: E0227 16:57:15.913418 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:16.413402943 +0000 UTC m=+234.929200530 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.967869 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.991740 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7rtdw"] Feb 27 16:57:15 crc kubenswrapper[4708]: I0227 16:57:15.995228 4708 ???:1] "http: TLS handshake error from 192.168.126.11:33844: no serving certificate available for the kubelet" Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.024970 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:16 crc kubenswrapper[4708]: E0227 16:57:16.025468 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:16.52544921 +0000 UTC m=+235.041246807 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.131081 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:16 crc kubenswrapper[4708]: E0227 16:57:16.131447 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:16.631433597 +0000 UTC m=+235.147231184 (durationBeforeRetry 500ms). 
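
The router startup probe records above follow the Kubernetes healthz convention: each named sub-check prints [+] ok or [-] failed, any failure turns the endpoint into an HTTP 500, and the kubelet prober counts any status of 400 or more as a probe failure. A self-contained sketch of that convention; the check names come from the log output, but the handler itself is an illustration, not the router's code:

    package main

    import (
        "fmt"
        "net/http"
    )

    // A named healthz sub-check, as in "[-]backend-http failed" above.
    type check struct {
        name string
        fn   func() error
    }

    func healthz(checks []check) http.HandlerFunc {
        return func(w http.ResponseWriter, r *http.Request) {
            body, failed := "", false
            for _, c := range checks {
                if err := c.fn(); err != nil {
                    failed = true
                    body += fmt.Sprintf("[-]%s failed: reason withheld\n", c.name)
                } else {
                    body += fmt.Sprintf("[+]%s ok\n", c.name)
                }
            }
            if failed {
                // The prober sees this 500 and marks the startup probe failed.
                w.WriteHeader(http.StatusInternalServerError)
                body += "healthz check failed\n"
            }
            fmt.Fprint(w, body)
        }
    }

    func main() {
        notReady := func() error { return fmt.Errorf("not synced") }
        ok := func() error { return nil }
        http.HandleFunc("/healthz", healthz([]check{
            {"backend-http", notReady},
            {"has-synced", notReady},
            {"process-running", ok},
        }))
        fmt.Println(http.ListenAndServe(":8080", nil))
    }
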
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.133832 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hw5dq"] Feb 27 16:57:16 crc kubenswrapper[4708]: W0227 16:57:16.209425 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70493bd3_d5c2_49e2_bd00_ac98325a2187.slice/crio-b5c2b20671590b20ee66a4fc8ed67bc358afc3c305b9d38a96077ef47726b3f0 WatchSource:0}: Error finding container b5c2b20671590b20ee66a4fc8ed67bc358afc3c305b9d38a96077ef47726b3f0: Status 404 returned error can't find the container with id b5c2b20671590b20ee66a4fc8ed67bc358afc3c305b9d38a96077ef47726b3f0 Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.232555 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:16 crc kubenswrapper[4708]: E0227 16:57:16.232840 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:16.732827184 +0000 UTC m=+235.248624771 (durationBeforeRetry 500ms). 
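
The manager.go:1169 warning above is a benign race on container churn: cAdvisor receives a cgroup watch event for crio-b5c2b2067159..., asks the runtime about that ID, and gets a 404 because the container is not visible yet (the matching ContainerStarted event for b5c2b2067159... lands a moment later, below). The handling is log-and-skip rather than fail; a sketch of that tolerance, with the error-string matching purely illustrative:

    package main

    import (
        "errors"
        "fmt"
        "strings"
    )

    var errContainerGone = errors.New("container already gone or not yet visible; skipping watch event")

    // classifyWatchErr mirrors the tolerant handling above: a 404 for a
    // container that raced away is demoted to a skippable condition.
    func classifyWatchErr(err error) error {
        if err != nil && strings.Contains(err.Error(), "can't find the container with id") {
            return errContainerGone
        }
        return err
    }

    func main() {
        err := fmt.Errorf("Status 404 returned error can't find the container with id b5c2b20671590b20ee66a4fc8ed67bc358afc3c305b9d38a96077ef47726b3f0")
        fmt.Println(classifyWatchErr(err))
    }
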
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.275741 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-4t52p" event={"ID":"79b58c0b-8d12-4391-999c-9689f9488f46","Type":"ContainerStarted","Data":"24357642372d3969673dac2e1890add35fb6872b6a0c2621244fccc9f737f57e"} Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.290876 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rtdw" event={"ID":"9b733486-f273-4bd5-afa3-d35d3d1feafc","Type":"ContainerStarted","Data":"3ada598e5866979706ca456a772dd8aa0362eb5a71d8d9b3fdbb646fd59c7aed"} Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.293885 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" event={"ID":"13ba4c67-0444-463e-94f9-80da83971df5","Type":"ContainerStarted","Data":"420a7f836903dae0b5c224025a1a0a94f50a3accf45cf067dba9995454619562"} Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.295666 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw5dq" event={"ID":"70493bd3-d5c2-49e2-bd00-ac98325a2187","Type":"ContainerStarted","Data":"b5c2b20671590b20ee66a4fc8ed67bc358afc3c305b9d38a96077ef47726b3f0"} Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.336234 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:16 crc kubenswrapper[4708]: E0227 16:57:16.336607 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:16.836592253 +0000 UTC m=+235.352389840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.439502 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:16 crc kubenswrapper[4708]: E0227 16:57:16.440795 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:16.940781703 +0000 UTC m=+235.456579290 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.443330 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-4t52p" podStartSLOduration=183.44331384 podStartE2EDuration="3m3.44331384s" podCreationTimestamp="2026-02-27 16:54:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:16.316951916 +0000 UTC m=+234.832749503" watchObservedRunningTime="2026-02-27 16:57:16.44331384 +0000 UTC m=+234.959111427" Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.454036 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-km9ss"] Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.454352 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" podUID="b710111d-81c5-463d-b2ea-f7f3f5e27b90" containerName="controller-manager" containerID="cri-o://75664d75da3e61aaeaa791208ad9750ec2947131e51b5212a936bd40918ad919" gracePeriod=30 Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.454546 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm"] Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.454741 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" podUID="855a2824-4e4a-4eae-9e71-3bc0db42f169" containerName="route-controller-manager" containerID="cri-o://393b294e3f360cef03ec5b32796766e160f197157e5df6dcdf50e2f40d8f0a35" gracePeriod=30 Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.543393 4708 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:16 crc kubenswrapper[4708]: E0227 16:57:16.543828 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:17.043811733 +0000 UTC m=+235.559609310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.556692 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zvqlm"] Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.645483 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:16 crc kubenswrapper[4708]: E0227 16:57:16.645832 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:17.145820806 +0000 UTC m=+235.661618393 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.725905 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ggb2w"] Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.750740 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:16 crc kubenswrapper[4708]: E0227 16:57:16.751114 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 16:57:17.251072574 +0000 UTC m=+235.766870161 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.751210 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:16 crc kubenswrapper[4708]: E0227 16:57:16.751660 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:17.251648499 +0000 UTC m=+235.767446086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.755807 4708 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.826347 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:57:16 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld Feb 27 16:57:16 crc kubenswrapper[4708]: [+]process-running ok Feb 27 16:57:16 crc kubenswrapper[4708]: healthz check failed Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.826413 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.853405 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:16 crc kubenswrapper[4708]: E0227 16:57:16.853681 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:17.353667372 +0000 UTC m=+235.869464959 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:16 crc kubenswrapper[4708]: I0227 16:57:16.954771 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:16 crc kubenswrapper[4708]: E0227 16:57:16.955418 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:17.455406018 +0000 UTC m=+235.971203605 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.007956 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p5lwl"] Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.008897 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.014079 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.028193 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5lwl"] Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.056347 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:17 crc kubenswrapper[4708]: E0227 16:57:17.060285 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:17.560267706 +0000 UTC m=+236.076065293 (durationBeforeRetry 500ms).
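
The turning point for the volume errors is the plugin_watcher record above (16:57:16.755807): the provisioner's registration socket finally appears under /var/lib/kubelet/plugins_registry, which starts the driver's registration handshake. The TearDown/MountDevice failures visibly continue for a few more 500ms rounds below while that completes. kubelet's plugin watcher reacts to filesystem notifications; a stdlib-only Go sketch that polls for the same socket instead:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the driver's registration socket exists.
    // Polling keeps the sketch dependency-free; kubelet does not poll.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // same cadence as durationBeforeRetry above
        }
        return fmt.Errorf("socket %s did not appear within %v", path, timeout)
    }

    func main() {
        sock := "/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock"
        if err := waitForSocket(sock, 2*time.Minute); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("registration socket present:", sock)
    }
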
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.091970 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.096010 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.161933 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq7h6\" (UniqueName: \"kubernetes.io/projected/5c38d70c-968f-44dd-b42b-013bc033debb-kube-api-access-tq7h6\") pod \"redhat-marketplace-p5lwl\" (UID: \"5c38d70c-968f-44dd-b42b-013bc033debb\") " pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.161984 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.162045 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c38d70c-968f-44dd-b42b-013bc033debb-utilities\") pod \"redhat-marketplace-p5lwl\" (UID: \"5c38d70c-968f-44dd-b42b-013bc033debb\") " pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.162065 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c38d70c-968f-44dd-b42b-013bc033debb-catalog-content\") pod \"redhat-marketplace-p5lwl\" (UID: \"5c38d70c-968f-44dd-b42b-013bc033debb\") " pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:57:17 crc kubenswrapper[4708]: E0227 16:57:17.162339 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:17.66232874 +0000 UTC m=+236.178126327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.262660 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-client-ca\") pod \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.262777 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-proxy-ca-bundles\") pod \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.262804 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5bd8t\" (UniqueName: \"kubernetes.io/projected/855a2824-4e4a-4eae-9e71-3bc0db42f169-kube-api-access-5bd8t\") pod \"855a2824-4e4a-4eae-9e71-3bc0db42f169\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.262832 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fwgt9\" (UniqueName: \"kubernetes.io/projected/b710111d-81c5-463d-b2ea-f7f3f5e27b90-kube-api-access-fwgt9\") pod \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.263099 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.263120 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/855a2824-4e4a-4eae-9e71-3bc0db42f169-client-ca\") pod \"855a2824-4e4a-4eae-9e71-3bc0db42f169\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.263149 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/855a2824-4e4a-4eae-9e71-3bc0db42f169-config\") pod \"855a2824-4e4a-4eae-9e71-3bc0db42f169\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.263187 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b710111d-81c5-463d-b2ea-f7f3f5e27b90-serving-cert\") pod \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.263235 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-config\") pod \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\" (UID: \"b710111d-81c5-463d-b2ea-f7f3f5e27b90\") " Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.263260 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/855a2824-4e4a-4eae-9e71-3bc0db42f169-serving-cert\") pod \"855a2824-4e4a-4eae-9e71-3bc0db42f169\" (UID: \"855a2824-4e4a-4eae-9e71-3bc0db42f169\") " Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.263517 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c38d70c-968f-44dd-b42b-013bc033debb-utilities\") pod \"redhat-marketplace-p5lwl\" (UID: \"5c38d70c-968f-44dd-b42b-013bc033debb\") " pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.263544 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c38d70c-968f-44dd-b42b-013bc033debb-catalog-content\") pod \"redhat-marketplace-p5lwl\" (UID: \"5c38d70c-968f-44dd-b42b-013bc033debb\") " pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.263574 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tq7h6\" (UniqueName: \"kubernetes.io/projected/5c38d70c-968f-44dd-b42b-013bc033debb-kube-api-access-tq7h6\") pod \"redhat-marketplace-p5lwl\" (UID: \"5c38d70c-968f-44dd-b42b-013bc033debb\") " pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.263817 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-client-ca" (OuterVolumeSpecName: "client-ca") pod "b710111d-81c5-463d-b2ea-f7f3f5e27b90" (UID: "b710111d-81c5-463d-b2ea-f7f3f5e27b90"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.264987 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "b710111d-81c5-463d-b2ea-f7f3f5e27b90" (UID: "b710111d-81c5-463d-b2ea-f7f3f5e27b90"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:57:17 crc kubenswrapper[4708]: E0227 16:57:17.267019 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-02-27 16:57:17.766988033 +0000 UTC m=+236.282785620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.267503 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/855a2824-4e4a-4eae-9e71-3bc0db42f169-client-ca" (OuterVolumeSpecName: "client-ca") pod "855a2824-4e4a-4eae-9e71-3bc0db42f169" (UID: "855a2824-4e4a-4eae-9e71-3bc0db42f169"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.267507 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-config" (OuterVolumeSpecName: "config") pod "b710111d-81c5-463d-b2ea-f7f3f5e27b90" (UID: "b710111d-81c5-463d-b2ea-f7f3f5e27b90"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.267929 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c38d70c-968f-44dd-b42b-013bc033debb-utilities\") pod \"redhat-marketplace-p5lwl\" (UID: \"5c38d70c-968f-44dd-b42b-013bc033debb\") " pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.267959 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c38d70c-968f-44dd-b42b-013bc033debb-catalog-content\") pod \"redhat-marketplace-p5lwl\" (UID: \"5c38d70c-968f-44dd-b42b-013bc033debb\") " pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.273086 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/855a2824-4e4a-4eae-9e71-3bc0db42f169-config" (OuterVolumeSpecName: "config") pod "855a2824-4e4a-4eae-9e71-3bc0db42f169" (UID: "855a2824-4e4a-4eae-9e71-3bc0db42f169"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.296199 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b710111d-81c5-463d-b2ea-f7f3f5e27b90-kube-api-access-fwgt9" (OuterVolumeSpecName: "kube-api-access-fwgt9") pod "b710111d-81c5-463d-b2ea-f7f3f5e27b90" (UID: "b710111d-81c5-463d-b2ea-f7f3f5e27b90"). InnerVolumeSpecName "kube-api-access-fwgt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.302103 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/855a2824-4e4a-4eae-9e71-3bc0db42f169-kube-api-access-5bd8t" (OuterVolumeSpecName: "kube-api-access-5bd8t") pod "855a2824-4e4a-4eae-9e71-3bc0db42f169" (UID: "855a2824-4e4a-4eae-9e71-3bc0db42f169"). InnerVolumeSpecName "kube-api-access-5bd8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.303227 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq7h6\" (UniqueName: \"kubernetes.io/projected/5c38d70c-968f-44dd-b42b-013bc033debb-kube-api-access-tq7h6\") pod \"redhat-marketplace-p5lwl\" (UID: \"5c38d70c-968f-44dd-b42b-013bc033debb\") " pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.303789 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b710111d-81c5-463d-b2ea-f7f3f5e27b90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "b710111d-81c5-463d-b2ea-f7f3f5e27b90" (UID: "b710111d-81c5-463d-b2ea-f7f3f5e27b90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.304115 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/855a2824-4e4a-4eae-9e71-3bc0db42f169-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "855a2824-4e4a-4eae-9e71-3bc0db42f169" (UID: "855a2824-4e4a-4eae-9e71-3bc0db42f169"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.308950 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-678487474-jn4cf"] Feb 27 16:57:17 crc kubenswrapper[4708]: E0227 16:57:17.309227 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b710111d-81c5-463d-b2ea-f7f3f5e27b90" containerName="controller-manager" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.309243 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b710111d-81c5-463d-b2ea-f7f3f5e27b90" containerName="controller-manager" Feb 27 16:57:17 crc kubenswrapper[4708]: E0227 16:57:17.309254 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="855a2824-4e4a-4eae-9e71-3bc0db42f169" containerName="route-controller-manager" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.309261 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="855a2824-4e4a-4eae-9e71-3bc0db42f169" containerName="route-controller-manager" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.309348 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="b710111d-81c5-463d-b2ea-f7f3f5e27b90" containerName="controller-manager" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.309361 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="855a2824-4e4a-4eae-9e71-3bc0db42f169" containerName="route-controller-manager" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.309771 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.313687 4708 generic.go:334] "Generic (PLEG): container finished" podID="855a2824-4e4a-4eae-9e71-3bc0db42f169" containerID="393b294e3f360cef03ec5b32796766e160f197157e5df6dcdf50e2f40d8f0a35" exitCode=0 Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.313746 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" event={"ID":"855a2824-4e4a-4eae-9e71-3bc0db42f169","Type":"ContainerDied","Data":"393b294e3f360cef03ec5b32796766e160f197157e5df6dcdf50e2f40d8f0a35"} Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.313767 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" event={"ID":"855a2824-4e4a-4eae-9e71-3bc0db42f169","Type":"ContainerDied","Data":"0a3ee0ac6f8bd718e5642b37d80ba525079a7af49a83c11dc9a90b34c62a1e3c"} Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.313783 4708 scope.go:117] "RemoveContainer" containerID="393b294e3f360cef03ec5b32796766e160f197157e5df6dcdf50e2f40d8f0a35" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.313908 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.317901 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-678487474-jn4cf"] Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.321350 4708 generic.go:334] "Generic (PLEG): container finished" podID="b710111d-81c5-463d-b2ea-f7f3f5e27b90" containerID="75664d75da3e61aaeaa791208ad9750ec2947131e51b5212a936bd40918ad919" exitCode=0 Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.321438 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" event={"ID":"b710111d-81c5-463d-b2ea-f7f3f5e27b90","Type":"ContainerDied","Data":"75664d75da3e61aaeaa791208ad9750ec2947131e51b5212a936bd40918ad919"} Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.321465 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" event={"ID":"b710111d-81c5-463d-b2ea-f7f3f5e27b90","Type":"ContainerDied","Data":"f1b32010ae29eae2dffa2eb65b12bccd6c11a4185cf69ca56044446db37c7ab0"} Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.321529 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-km9ss" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.339919 4708 generic.go:334] "Generic (PLEG): container finished" podID="70493bd3-d5c2-49e2-bd00-ac98325a2187" containerID="73b77b3ba08fba9c5e79d10554c013930ac929e1642a6aec712f7a06a5f693b8" exitCode=0 Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.339998 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw5dq" event={"ID":"70493bd3-d5c2-49e2-bd00-ac98325a2187","Type":"ContainerDied","Data":"73b77b3ba08fba9c5e79d10554c013930ac929e1642a6aec712f7a06a5f693b8"} Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.344687 4708 generic.go:334] "Generic (PLEG): container finished" podID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerID="9e3b65065d5da29a790a656e44a96c146bf1e9fffd7e81c2843c3ffe4817efb6" exitCode=0 Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.344758 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvqlm" event={"ID":"5710135c-fd59-4ff6-b74a-ad7ab8730aff","Type":"ContainerDied","Data":"9e3b65065d5da29a790a656e44a96c146bf1e9fffd7e81c2843c3ffe4817efb6"} Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.344784 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvqlm" event={"ID":"5710135c-fd59-4ff6-b74a-ad7ab8730aff","Type":"ContainerStarted","Data":"d83026fe88ba75305ca27a5ee02a909966a26a0637b8e06ab20769a0517bc190"} Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.345507 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5"] Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.347027 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.350635 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5"] Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.351227 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.351324 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.351386 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.351538 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.351546 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.351718 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.359383 4708 generic.go:334] "Generic (PLEG): container finished" podID="9b733486-f273-4bd5-afa3-d35d3d1feafc" containerID="dbe418d24ea81b93ae21ca19618c2d0bb6fc7b041b6e3c392b1b789b8b7d6b12" exitCode=0 Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.359467 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rtdw" event={"ID":"9b733486-f273-4bd5-afa3-d35d3d1feafc","Type":"ContainerDied","Data":"dbe418d24ea81b93ae21ca19618c2d0bb6fc7b041b6e3c392b1b789b8b7d6b12"} Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.364364 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.364476 4708 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.364488 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5bd8t\" (UniqueName: \"kubernetes.io/projected/855a2824-4e4a-4eae-9e71-3bc0db42f169-kube-api-access-5bd8t\") on node \"crc\" DevicePath \"\"" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.364499 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fwgt9\" (UniqueName: \"kubernetes.io/projected/b710111d-81c5-463d-b2ea-f7f3f5e27b90-kube-api-access-fwgt9\") on node \"crc\" DevicePath \"\"" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.364509 4708 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/855a2824-4e4a-4eae-9e71-3bc0db42f169-client-ca\") on node 
\"crc\" DevicePath \"\"" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.364518 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/855a2824-4e4a-4eae-9e71-3bc0db42f169-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.364528 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b710111d-81c5-463d-b2ea-f7f3f5e27b90-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.364536 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.364544 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/855a2824-4e4a-4eae-9e71-3bc0db42f169-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.364552 4708 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b710111d-81c5-463d-b2ea-f7f3f5e27b90-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:57:17 crc kubenswrapper[4708]: E0227 16:57:17.364807 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:17.864796225 +0000 UTC m=+236.380593812 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.366255 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" event={"ID":"13ba4c67-0444-463e-94f9-80da83971df5","Type":"ContainerStarted","Data":"1a01d77e047737ba83ba859f98bdd325f41c7b746128b720bc19b4e36adbd23c"} Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.366286 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" event={"ID":"13ba4c67-0444-463e-94f9-80da83971df5","Type":"ContainerStarted","Data":"b3fedb7507e69f9d2264e09b2f342a34928c79e57bc05441774113841d0ff8b7"} Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.371993 4708 generic.go:334] "Generic (PLEG): container finished" podID="b2d410d4-9144-42b4-96c9-345732131a7e" containerID="1c592fbb6ac1a0683ea713dcfe6e9f6b8b6f72e7ac49d699cec6fa4c3389eff0" exitCode=0 Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.378810 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggb2w" event={"ID":"b2d410d4-9144-42b4-96c9-345732131a7e","Type":"ContainerDied","Data":"1c592fbb6ac1a0683ea713dcfe6e9f6b8b6f72e7ac49d699cec6fa4c3389eff0"} Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.379126 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggb2w" 
event={"ID":"b2d410d4-9144-42b4-96c9-345732131a7e","Type":"ContainerStarted","Data":"12b27581b23a64ecab17b77c471df8bd69ef75eac8cb3b23635dcff55e9e0a61"} Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.426910 4708 scope.go:117] "RemoveContainer" containerID="393b294e3f360cef03ec5b32796766e160f197157e5df6dcdf50e2f40d8f0a35" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.428560 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:57:17 crc kubenswrapper[4708]: E0227 16:57:17.428974 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"393b294e3f360cef03ec5b32796766e160f197157e5df6dcdf50e2f40d8f0a35\": container with ID starting with 393b294e3f360cef03ec5b32796766e160f197157e5df6dcdf50e2f40d8f0a35 not found: ID does not exist" containerID="393b294e3f360cef03ec5b32796766e160f197157e5df6dcdf50e2f40d8f0a35" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.429017 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"393b294e3f360cef03ec5b32796766e160f197157e5df6dcdf50e2f40d8f0a35"} err="failed to get container status \"393b294e3f360cef03ec5b32796766e160f197157e5df6dcdf50e2f40d8f0a35\": rpc error: code = NotFound desc = could not find container \"393b294e3f360cef03ec5b32796766e160f197157e5df6dcdf50e2f40d8f0a35\": container with ID starting with 393b294e3f360cef03ec5b32796766e160f197157e5df6dcdf50e2f40d8f0a35 not found: ID does not exist" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.429035 4708 scope.go:117] "RemoveContainer" containerID="75664d75da3e61aaeaa791208ad9750ec2947131e51b5212a936bd40918ad919" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.435752 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xmm5v"] Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.437451 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.440521 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-8zjdt" podStartSLOduration=11.440487186 podStartE2EDuration="11.440487186s" podCreationTimestamp="2026-02-27 16:57:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:17.421669871 +0000 UTC m=+235.937467448" watchObservedRunningTime="2026-02-27 16:57:17.440487186 +0000 UTC m=+235.956284793" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.452695 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmm5v"] Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.465964 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:17 crc kubenswrapper[4708]: E0227 16:57:17.466150 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-02-27 16:57:17.96612464 +0000 UTC m=+236.481922227 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.466646 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-config\") pod \"controller-manager-678487474-jn4cf\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.466693 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rq686\" (UniqueName: \"kubernetes.io/projected/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-kube-api-access-rq686\") pod \"controller-manager-678487474-jn4cf\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.466725 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-serving-cert\") pod \"route-controller-manager-54b776fb6d-xzjl5\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") " pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.467953 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-serving-cert\") pod \"controller-manager-678487474-jn4cf\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.468264 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-config\") pod \"route-controller-manager-54b776fb6d-xzjl5\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") " pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.468411 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-client-ca\") pod \"route-controller-manager-54b776fb6d-xzjl5\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") " pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.468488 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-client-ca\") pod \"controller-manager-678487474-jn4cf\" (UID: 
\"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.468571 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.468625 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsmg9\" (UniqueName: \"kubernetes.io/projected/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-kube-api-access-wsmg9\") pod \"route-controller-manager-54b776fb6d-xzjl5\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") " pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.468648 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-proxy-ca-bundles\") pod \"controller-manager-678487474-jn4cf\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: E0227 16:57:17.469210 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-02-27 16:57:17.96917087 +0000 UTC m=+236.484968457 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-89q5w" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.488539 4708 scope.go:117] "RemoveContainer" containerID="75664d75da3e61aaeaa791208ad9750ec2947131e51b5212a936bd40918ad919" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.488688 4708 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-02-27T16:57:16.755821219Z","Handler":null,"Name":""} Feb 27 16:57:17 crc kubenswrapper[4708]: E0227 16:57:17.490088 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75664d75da3e61aaeaa791208ad9750ec2947131e51b5212a936bd40918ad919\": container with ID starting with 75664d75da3e61aaeaa791208ad9750ec2947131e51b5212a936bd40918ad919 not found: ID does not exist" containerID="75664d75da3e61aaeaa791208ad9750ec2947131e51b5212a936bd40918ad919" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.490163 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75664d75da3e61aaeaa791208ad9750ec2947131e51b5212a936bd40918ad919"} err="failed to get container status \"75664d75da3e61aaeaa791208ad9750ec2947131e51b5212a936bd40918ad919\": rpc error: code = NotFound desc = could not find container \"75664d75da3e61aaeaa791208ad9750ec2947131e51b5212a936bd40918ad919\": container with ID starting with 75664d75da3e61aaeaa791208ad9750ec2947131e51b5212a936bd40918ad919 not found: ID does not exist" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.495137 4708 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.495162 4708 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.495604 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm"] Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.498974 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-gxppm"] Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.526905 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-km9ss"] Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.529430 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-km9ss"] Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.570476 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.570817 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-serving-cert\") pod \"route-controller-manager-54b776fb6d-xzjl5\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") " pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.570936 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-serving-cert\") pod \"controller-manager-678487474-jn4cf\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.570986 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-config\") pod \"route-controller-manager-54b776fb6d-xzjl5\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") " pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.571011 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-client-ca\") pod \"route-controller-manager-54b776fb6d-xzjl5\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") " pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.572378 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b091d644-ad3d-4b63-976d-16e3c0caa3e4-catalog-content\") pod \"redhat-marketplace-xmm5v\" (UID: \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\") " pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.572439 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-client-ca\") pod \"controller-manager-678487474-jn4cf\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.572502 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsmg9\" (UniqueName: \"kubernetes.io/projected/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-kube-api-access-wsmg9\") pod \"route-controller-manager-54b776fb6d-xzjl5\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") " pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.575581 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-client-ca\") pod \"route-controller-manager-54b776fb6d-xzjl5\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") " 
pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.575713 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-config\") pod \"route-controller-manager-54b776fb6d-xzjl5\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") " pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.576350 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-serving-cert\") pod \"controller-manager-678487474-jn4cf\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.576424 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-client-ca\") pod \"controller-manager-678487474-jn4cf\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.572542 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-proxy-ca-bundles\") pod \"controller-manager-678487474-jn4cf\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.576734 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.578305 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-proxy-ca-bundles\") pod \"controller-manager-678487474-jn4cf\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.580304 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-serving-cert\") pod \"route-controller-manager-54b776fb6d-xzjl5\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") " pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.582314 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8m5l\" (UniqueName: \"kubernetes.io/projected/b091d644-ad3d-4b63-976d-16e3c0caa3e4-kube-api-access-d8m5l\") pod \"redhat-marketplace-xmm5v\" (UID: \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\") " pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.582661 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b091d644-ad3d-4b63-976d-16e3c0caa3e4-utilities\") pod \"redhat-marketplace-xmm5v\" (UID: \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\") " pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.582703 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-config\") pod \"controller-manager-678487474-jn4cf\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.582741 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rq686\" (UniqueName: \"kubernetes.io/projected/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-kube-api-access-rq686\") pod \"controller-manager-678487474-jn4cf\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.585145 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-config\") pod \"controller-manager-678487474-jn4cf\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.615744 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsmg9\" (UniqueName: \"kubernetes.io/projected/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-kube-api-access-wsmg9\") pod \"route-controller-manager-54b776fb6d-xzjl5\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") " pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.623912 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-rq686\" (UniqueName: \"kubernetes.io/projected/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-kube-api-access-rq686\") pod \"controller-manager-678487474-jn4cf\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") " pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.686116 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b091d644-ad3d-4b63-976d-16e3c0caa3e4-catalog-content\") pod \"redhat-marketplace-xmm5v\" (UID: \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\") " pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.686217 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.686276 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d8m5l\" (UniqueName: \"kubernetes.io/projected/b091d644-ad3d-4b63-976d-16e3c0caa3e4-kube-api-access-d8m5l\") pod \"redhat-marketplace-xmm5v\" (UID: \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\") " pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.686307 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b091d644-ad3d-4b63-976d-16e3c0caa3e4-utilities\") pod \"redhat-marketplace-xmm5v\" (UID: \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\") " pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.686610 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b091d644-ad3d-4b63-976d-16e3c0caa3e4-catalog-content\") pod \"redhat-marketplace-xmm5v\" (UID: \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\") " pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.687071 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b091d644-ad3d-4b63-976d-16e3c0caa3e4-utilities\") pod \"redhat-marketplace-xmm5v\" (UID: \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\") " pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.694833 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.694881 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.695746 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5lwl"] Feb 27 16:57:17 crc kubenswrapper[4708]: W0227 16:57:17.704904 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c38d70c_968f_44dd_b42b_013bc033debb.slice/crio-8b65fd6ba1f80c3c40ee28b6e921689ffa5e1afd03e6422ab1d750d75b886657 WatchSource:0}: Error finding container 8b65fd6ba1f80c3c40ee28b6e921689ffa5e1afd03e6422ab1d750d75b886657: Status 404 returned error can't find the container with id 8b65fd6ba1f80c3c40ee28b6e921689ffa5e1afd03e6422ab1d750d75b886657 Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.705934 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8m5l\" (UniqueName: \"kubernetes.io/projected/b091d644-ad3d-4b63-976d-16e3c0caa3e4-kube-api-access-d8m5l\") pod \"redhat-marketplace-xmm5v\" (UID: \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\") " pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.723043 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678487474-jn4cf" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.729674 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-89q5w\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.736307 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.763415 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.781975 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.782625 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.786244 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.786424 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.789877 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.822041 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Feb 27 16:57:17 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld Feb 27 16:57:17 crc kubenswrapper[4708]: [+]process-running ok Feb 27 16:57:17 crc kubenswrapper[4708]: healthz check failed Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.822688 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.891583 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3dbc9e2-dda0-4089-b070-bb06b8369491-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3dbc9e2-dda0-4089-b070-bb06b8369491\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.891659 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3dbc9e2-dda0-4089-b070-bb06b8369491-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3dbc9e2-dda0-4089-b070-bb06b8369491\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.900917 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.993001 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3dbc9e2-dda0-4089-b070-bb06b8369491-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3dbc9e2-dda0-4089-b070-bb06b8369491\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.993104 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3dbc9e2-dda0-4089-b070-bb06b8369491-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3dbc9e2-dda0-4089-b070-bb06b8369491\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:57:17 crc kubenswrapper[4708]: I0227 16:57:17.993191 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3dbc9e2-dda0-4089-b070-bb06b8369491-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"a3dbc9e2-dda0-4089-b070-bb06b8369491\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.015330 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3dbc9e2-dda0-4089-b070-bb06b8369491-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"a3dbc9e2-dda0-4089-b070-bb06b8369491\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.049806 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5"] Feb 27 16:57:18 crc kubenswrapper[4708]: W0227 16:57:18.060474 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75da6fa3_efd9_4f21_a7bc_ec0db67ed26c.slice/crio-518092f457c1c28ecf0f6f9de7b4ed4ef7246690834c4aef9d52a0ad013db08e WatchSource:0}: Error finding container 518092f457c1c28ecf0f6f9de7b4ed4ef7246690834c4aef9d52a0ad013db08e: Status 404 returned error can't find the container with id 518092f457c1c28ecf0f6f9de7b4ed4ef7246690834c4aef9d52a0ad013db08e Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.115221 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.196995 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-89q5w"] Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.211346 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lmzsx"] Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.212335 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.215768 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.261660 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="855a2824-4e4a-4eae-9e71-3bc0db42f169" path="/var/lib/kubelet/pods/855a2824-4e4a-4eae-9e71-3bc0db42f169/volumes" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.262479 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.263083 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b710111d-81c5-463d-b2ea-f7f3f5e27b90" path="/var/lib/kubelet/pods/b710111d-81c5-463d-b2ea-f7f3f5e27b90/volumes" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.264055 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lmzsx"] Feb 27 16:57:18 crc kubenswrapper[4708]: W0227 16:57:18.274977 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode11dd889_39c0_43fc_aae8_fef332bad5ed.slice/crio-4372a6e02ae0ecc2db3a805029d885f7e27aad76c499894849a37edf1ef04a06 WatchSource:0}: Error finding container 4372a6e02ae0ecc2db3a805029d885f7e27aad76c499894849a37edf1ef04a06: Status 404 returned error can't find the container with id 4372a6e02ae0ecc2db3a805029d885f7e27aad76c499894849a37edf1ef04a06 Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.297059 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4czbr\" (UniqueName: \"kubernetes.io/projected/96160365-88cf-419c-a2d2-04818cde5016-kube-api-access-4czbr\") pod \"redhat-operators-lmzsx\" (UID: \"96160365-88cf-419c-a2d2-04818cde5016\") " pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.297105 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96160365-88cf-419c-a2d2-04818cde5016-utilities\") pod \"redhat-operators-lmzsx\" (UID: \"96160365-88cf-419c-a2d2-04818cde5016\") " pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.297189 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96160365-88cf-419c-a2d2-04818cde5016-catalog-content\") pod \"redhat-operators-lmzsx\" (UID: \"96160365-88cf-419c-a2d2-04818cde5016\") " pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.319951 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmm5v"] Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.326184 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-678487474-jn4cf"] Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.397929 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4czbr\" (UniqueName: 
\"kubernetes.io/projected/96160365-88cf-419c-a2d2-04818cde5016-kube-api-access-4czbr\") pod \"redhat-operators-lmzsx\" (UID: \"96160365-88cf-419c-a2d2-04818cde5016\") " pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.397966 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96160365-88cf-419c-a2d2-04818cde5016-utilities\") pod \"redhat-operators-lmzsx\" (UID: \"96160365-88cf-419c-a2d2-04818cde5016\") " pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.398038 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96160365-88cf-419c-a2d2-04818cde5016-catalog-content\") pod \"redhat-operators-lmzsx\" (UID: \"96160365-88cf-419c-a2d2-04818cde5016\") " pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.398453 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96160365-88cf-419c-a2d2-04818cde5016-catalog-content\") pod \"redhat-operators-lmzsx\" (UID: \"96160365-88cf-419c-a2d2-04818cde5016\") " pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.398941 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96160365-88cf-419c-a2d2-04818cde5016-utilities\") pod \"redhat-operators-lmzsx\" (UID: \"96160365-88cf-419c-a2d2-04818cde5016\") " pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:57:18 crc kubenswrapper[4708]: W0227 16:57:18.414097 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e3175a0_bf5f_4dfa_9fa4_4066a7e7ae46.slice/crio-cd56577fc561891e4ecdda0df4f9437f6321b4b8fb09200cc3385130e5718ed6 WatchSource:0}: Error finding container cd56577fc561891e4ecdda0df4f9437f6321b4b8fb09200cc3385130e5718ed6: Status 404 returned error can't find the container with id cd56577fc561891e4ecdda0df4f9437f6321b4b8fb09200cc3385130e5718ed6 Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.420546 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4czbr\" (UniqueName: \"kubernetes.io/projected/96160365-88cf-419c-a2d2-04818cde5016-kube-api-access-4czbr\") pod \"redhat-operators-lmzsx\" (UID: \"96160365-88cf-419c-a2d2-04818cde5016\") " pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.434811 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" event={"ID":"e11dd889-39c0-43fc-aae8-fef332bad5ed","Type":"ContainerStarted","Data":"4372a6e02ae0ecc2db3a805029d885f7e27aad76c499894849a37edf1ef04a06"} Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.438228 4708 generic.go:334] "Generic (PLEG): container finished" podID="5c38d70c-968f-44dd-b42b-013bc033debb" containerID="ca00f4aeed7628f76ae0a610fe0bc66fbb5ec699b4d37356003dc41335e77ff2" exitCode=0 Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.438288 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5lwl" 
event={"ID":"5c38d70c-968f-44dd-b42b-013bc033debb","Type":"ContainerDied","Data":"ca00f4aeed7628f76ae0a610fe0bc66fbb5ec699b4d37356003dc41335e77ff2"} Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.438303 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5lwl" event={"ID":"5c38d70c-968f-44dd-b42b-013bc033debb","Type":"ContainerStarted","Data":"8b65fd6ba1f80c3c40ee28b6e921689ffa5e1afd03e6422ab1d750d75b886657"} Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.447968 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" event={"ID":"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c","Type":"ContainerStarted","Data":"58e847b3305bbff64cbea9186ce35f98f85364d0fb901752d5f3b868a3c40eb9"} Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.447994 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.448004 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" event={"ID":"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c","Type":"ContainerStarted","Data":"518092f457c1c28ecf0f6f9de7b4ed4ef7246690834c4aef9d52a0ad013db08e"} Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.479722 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" podStartSLOduration=1.479704428 podStartE2EDuration="1.479704428s" podCreationTimestamp="2026-02-27 16:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:18.476972876 +0000 UTC m=+236.992770463" watchObservedRunningTime="2026-02-27 16:57:18.479704428 +0000 UTC m=+236.995502035" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.592678 4708 ???:1] "http: TLS handshake error from 192.168.126.11:47188: no serving certificate available for the kubelet" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.605259 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.608723 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.609387 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.610521 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.614748 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.615132 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.629007 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-j29cw"] Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.631187 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-j29cw" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.641081 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.651974 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j29cw"] Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.708471 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-catalog-content\") pod \"redhat-operators-j29cw\" (UID: \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\") " pod="openshift-marketplace/redhat-operators-j29cw" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.708524 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d5bcf86-041b-4cf2-9736-3a16b380a5aa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4d5bcf86-041b-4cf2-9736-3a16b380a5aa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.708556 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-utilities\") pod \"redhat-operators-j29cw\" (UID: \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\") " pod="openshift-marketplace/redhat-operators-j29cw" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.708579 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5jjf\" (UniqueName: \"kubernetes.io/projected/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-kube-api-access-p5jjf\") pod \"redhat-operators-j29cw\" (UID: \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\") " pod="openshift-marketplace/redhat-operators-j29cw" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.708601 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d5bcf86-041b-4cf2-9736-3a16b380a5aa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4d5bcf86-041b-4cf2-9736-3a16b380a5aa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.736448 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.809581 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-catalog-content\") pod \"redhat-operators-j29cw\" (UID: \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\") " pod="openshift-marketplace/redhat-operators-j29cw" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.809638 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d5bcf86-041b-4cf2-9736-3a16b380a5aa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4d5bcf86-041b-4cf2-9736-3a16b380a5aa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.809664 4708 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-utilities\") pod \"redhat-operators-j29cw\" (UID: \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\") " pod="openshift-marketplace/redhat-operators-j29cw"
Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.809681 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5jjf\" (UniqueName: \"kubernetes.io/projected/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-kube-api-access-p5jjf\") pod \"redhat-operators-j29cw\" (UID: \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\") " pod="openshift-marketplace/redhat-operators-j29cw"
Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.809697 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d5bcf86-041b-4cf2-9736-3a16b380a5aa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4d5bcf86-041b-4cf2-9736-3a16b380a5aa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.809787 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d5bcf86-041b-4cf2-9736-3a16b380a5aa-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4d5bcf86-041b-4cf2-9736-3a16b380a5aa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.810193 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-catalog-content\") pod \"redhat-operators-j29cw\" (UID: \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\") " pod="openshift-marketplace/redhat-operators-j29cw"
Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.810604 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-utilities\") pod \"redhat-operators-j29cw\" (UID: \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\") " pod="openshift-marketplace/redhat-operators-j29cw"
Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.827542 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 16:57:18 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld
Feb 27 16:57:18 crc kubenswrapper[4708]: [+]process-running ok
Feb 27 16:57:18 crc kubenswrapper[4708]: healthz check failed
Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.827593 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.831518 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d5bcf86-041b-4cf2-9736-3a16b380a5aa-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4d5bcf86-041b-4cf2-9736-3a16b380a5aa\") " pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.833252 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5jjf\" (UniqueName: \"kubernetes.io/projected/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-kube-api-access-p5jjf\") pod \"redhat-operators-j29cw\" (UID: \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\") " pod="openshift-marketplace/redhat-operators-j29cw"
Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.982343 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 27 16:57:18 crc kubenswrapper[4708]: I0227 16:57:18.987490 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j29cw"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.031189 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lmzsx"]
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.128728 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-d7z7j"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.128787 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-d7z7j"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.138913 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-d7z7j"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.277132 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.306368 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-nw84d"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.314360 4708 patch_prober.go:28] interesting pod/downloads-7954f5f757-bhsw7 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.314420 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-bhsw7" podUID="3bbf873e-72f0-4743-a2bc-4866dd8b8f86" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.314500 4708 patch_prober.go:28] interesting pod/downloads-7954f5f757-bhsw7 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused" start-of-body=
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.314548 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-bhsw7" podUID="3bbf873e-72f0-4743-a2bc-4866dd8b8f86" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.13:8080/\": dial tcp 10.217.0.13:8080: connect: connection refused"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.446198 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-cl8l9"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.448257 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-cl8l9"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.486771 4708 patch_prober.go:28] interesting pod/console-f9d7485db-cl8l9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.486862 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cl8l9" podUID="bd7c826a-ca70-4d4f-90ca-96f0b72c173a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.564322 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678487474-jn4cf" event={"ID":"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46","Type":"ContainerStarted","Data":"d57edeb945e6d47bc361cf0305cbda054ce46dcd664505e8b47b4f6517c05303"}
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.564399 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678487474-jn4cf" event={"ID":"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46","Type":"ContainerStarted","Data":"cd56577fc561891e4ecdda0df4f9437f6321b4b8fb09200cc3385130e5718ed6"}
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.565240 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-678487474-jn4cf"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.602510 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" event={"ID":"e11dd889-39c0-43fc-aae8-fef332bad5ed","Type":"ContainerStarted","Data":"12cebafa1a507c3f4ff844d79a3b0b287ff7130b6df079faa50db413e374c33f"}
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.602549 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-89q5w"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.604989 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-678487474-jn4cf"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.638670 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-678487474-jn4cf" podStartSLOduration=2.638653767 podStartE2EDuration="2.638653767s" podCreationTimestamp="2026-02-27 16:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:19.636991264 +0000 UTC m=+238.152788851" watchObservedRunningTime="2026-02-27 16:57:19.638653767 +0000 UTC m=+238.154451354"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.646929 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmzsx" event={"ID":"96160365-88cf-419c-a2d2-04818cde5016","Type":"ContainerStarted","Data":"ea224760ab242c4b2f7e13a45af44649e30ebe5272193b5ccb839038aeaf37e0"}
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.663873 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3dbc9e2-dda0-4089-b070-bb06b8369491","Type":"ContainerStarted","Data":"d5c7ae245b14811dc37746868ec461dbc9bd072ca18d32ad0fce14bc6320181b"}
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.663926 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3dbc9e2-dda0-4089-b070-bb06b8369491","Type":"ContainerStarted","Data":"f38707d1e637fcf5637e8d76606c1c3abfed4b63206a39453237d7f0dcf53919"}
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.667380 4708 generic.go:334] "Generic (PLEG): container finished" podID="b091d644-ad3d-4b63-976d-16e3c0caa3e4" containerID="10c40988bcc6330c55dff71d8a6617653932025406d335df054cd283c61e379d" exitCode=0
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.667770 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmm5v" event={"ID":"b091d644-ad3d-4b63-976d-16e3c0caa3e4","Type":"ContainerDied","Data":"10c40988bcc6330c55dff71d8a6617653932025406d335df054cd283c61e379d"}
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.667819 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmm5v" event={"ID":"b091d644-ad3d-4b63-976d-16e3c0caa3e4","Type":"ContainerStarted","Data":"2af99ec97af2e9143f059e117dd6422e291d2e52ee08c7641653b798c8e2b802"}
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.682038 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-d7z7j"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.775719 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" podStartSLOduration=187.775697852 podStartE2EDuration="3m7.775697852s" podCreationTimestamp="2026-02-27 16:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:19.72773804 +0000 UTC m=+238.243535627" watchObservedRunningTime="2026-02-27 16:57:19.775697852 +0000 UTC m=+238.291495439"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.796116 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"]
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.820771 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-n69rk"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.827998 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 16:57:19 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld
Feb 27 16:57:19 crc kubenswrapper[4708]: [+]process-running ok
Feb 27 16:57:19 crc kubenswrapper[4708]: healthz check failed
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.828048 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.854582 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.854564996 podStartE2EDuration="2.854564996s" podCreationTimestamp="2026-02-27 16:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:19.854514095 +0000 UTC m=+238.370311682" watchObservedRunningTime="2026-02-27 16:57:19.854564996 +0000 UTC m=+238.370362583"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.872279 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4"
Feb 27 16:57:19 crc kubenswrapper[4708]: I0227 16:57:19.995942 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-j29cw"]
Feb 27 16:57:20 crc kubenswrapper[4708]: I0227 16:57:20.680557 4708 generic.go:334] "Generic (PLEG): container finished" podID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" containerID="022d4cb88f294543c11716a99c427f3b6154ce07c0280778163d1981d8be1bdc" exitCode=0
Feb 27 16:57:20 crc kubenswrapper[4708]: I0227 16:57:20.680788 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j29cw" event={"ID":"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db","Type":"ContainerDied","Data":"022d4cb88f294543c11716a99c427f3b6154ce07c0280778163d1981d8be1bdc"}
Feb 27 16:57:20 crc kubenswrapper[4708]: I0227 16:57:20.681008 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j29cw" event={"ID":"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db","Type":"ContainerStarted","Data":"9376441537c2a3cb6e7e5ae47a749215f4af3e48a2782d36e0fbc14fcdbe5d18"}
Feb 27 16:57:20 crc kubenswrapper[4708]: I0227 16:57:20.699205 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmzsx" event={"ID":"96160365-88cf-419c-a2d2-04818cde5016","Type":"ContainerDied","Data":"ee27923e89f621ba2573099938ab38bd367b986ea4e351460267ac6b5a73757c"}
Feb 27 16:57:20 crc kubenswrapper[4708]: I0227 16:57:20.699894 4708 generic.go:334] "Generic (PLEG): container finished" podID="96160365-88cf-419c-a2d2-04818cde5016" containerID="ee27923e89f621ba2573099938ab38bd367b986ea4e351460267ac6b5a73757c" exitCode=0
Feb 27 16:57:20 crc kubenswrapper[4708]: I0227 16:57:20.704082 4708 generic.go:334] "Generic (PLEG): container finished" podID="a3dbc9e2-dda0-4089-b070-bb06b8369491" containerID="d5c7ae245b14811dc37746868ec461dbc9bd072ca18d32ad0fce14bc6320181b" exitCode=0
Feb 27 16:57:20 crc kubenswrapper[4708]: I0227 16:57:20.704145 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3dbc9e2-dda0-4089-b070-bb06b8369491","Type":"ContainerDied","Data":"d5c7ae245b14811dc37746868ec461dbc9bd072ca18d32ad0fce14bc6320181b"}
Feb 27 16:57:20 crc kubenswrapper[4708]: I0227 16:57:20.707811 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d5bcf86-041b-4cf2-9736-3a16b380a5aa","Type":"ContainerStarted","Data":"b356ff4a1229fb52d5e0e93a52130a2560678e23600e15740db3f73b3e33bf58"}
Feb 27 16:57:20 crc kubenswrapper[4708]: I0227 16:57:20.707898 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d5bcf86-041b-4cf2-9736-3a16b380a5aa","Type":"ContainerStarted","Data":"2b0ae27d95d71f5e2befc54404cbb6b7616197e305688a96f832f495b0af0c56"}
Feb 27 16:57:20 crc kubenswrapper[4708]: I0227 16:57:20.824313 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 16:57:20 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld
Feb 27 16:57:20 crc kubenswrapper[4708]: [+]process-running ok
Feb 27 16:57:20 crc kubenswrapper[4708]: healthz check failed
Feb 27 16:57:20 crc kubenswrapper[4708]: I0227 16:57:20.824478 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 16:57:21 crc kubenswrapper[4708]: I0227 16:57:21.723072 4708 generic.go:334] "Generic (PLEG): container finished" podID="1284b6e4-1c2c-443e-b18d-163396ede328" containerID="a8f01e7af3e88c8f59248409dbd41d37754ddfccda0e0f2944ffb70cfed48674" exitCode=0
Feb 27 16:57:21 crc kubenswrapper[4708]: I0227 16:57:21.723131 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" event={"ID":"1284b6e4-1c2c-443e-b18d-163396ede328","Type":"ContainerDied","Data":"a8f01e7af3e88c8f59248409dbd41d37754ddfccda0e0f2944ffb70cfed48674"}
Feb 27 16:57:21 crc kubenswrapper[4708]: I0227 16:57:21.732864 4708 generic.go:334] "Generic (PLEG): container finished" podID="4d5bcf86-041b-4cf2-9736-3a16b380a5aa" containerID="b356ff4a1229fb52d5e0e93a52130a2560678e23600e15740db3f73b3e33bf58" exitCode=0
Feb 27 16:57:21 crc kubenswrapper[4708]: I0227 16:57:21.733009 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d5bcf86-041b-4cf2-9736-3a16b380a5aa","Type":"ContainerDied","Data":"b356ff4a1229fb52d5e0e93a52130a2560678e23600e15740db3f73b3e33bf58"}
Feb 27 16:57:21 crc kubenswrapper[4708]: I0227 16:57:21.740578 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.7405647589999997 podStartE2EDuration="3.740564759s" podCreationTimestamp="2026-02-27 16:57:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:57:20.75319701 +0000 UTC m=+239.268994597" watchObservedRunningTime="2026-02-27 16:57:21.740564759 +0000 UTC m=+240.256362346"
Feb 27 16:57:21 crc kubenswrapper[4708]: I0227 16:57:21.821535 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 16:57:21 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld
Feb 27 16:57:21 crc kubenswrapper[4708]: [+]process-running ok
Feb 27 16:57:21 crc kubenswrapper[4708]: healthz check failed
Feb 27 16:57:21 crc kubenswrapper[4708]: I0227 16:57:21.821977 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 16:57:22 crc kubenswrapper[4708]: I0227 16:57:22.214129 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 27 16:57:22 crc kubenswrapper[4708]: I0227 16:57:22.411191 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3dbc9e2-dda0-4089-b070-bb06b8369491-kubelet-dir\") pod \"a3dbc9e2-dda0-4089-b070-bb06b8369491\" (UID: \"a3dbc9e2-dda0-4089-b070-bb06b8369491\") "
Feb 27 16:57:22 crc kubenswrapper[4708]: I0227 16:57:22.411297 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a3dbc9e2-dda0-4089-b070-bb06b8369491-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a3dbc9e2-dda0-4089-b070-bb06b8369491" (UID: "a3dbc9e2-dda0-4089-b070-bb06b8369491"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 16:57:22 crc kubenswrapper[4708]: I0227 16:57:22.412085 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3dbc9e2-dda0-4089-b070-bb06b8369491-kube-api-access\") pod \"a3dbc9e2-dda0-4089-b070-bb06b8369491\" (UID: \"a3dbc9e2-dda0-4089-b070-bb06b8369491\") "
Feb 27 16:57:22 crc kubenswrapper[4708]: I0227 16:57:22.412380 4708 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3dbc9e2-dda0-4089-b070-bb06b8369491-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:22 crc kubenswrapper[4708]: I0227 16:57:22.421359 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3dbc9e2-dda0-4089-b070-bb06b8369491-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a3dbc9e2-dda0-4089-b070-bb06b8369491" (UID: "a3dbc9e2-dda0-4089-b070-bb06b8369491"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:57:22 crc kubenswrapper[4708]: I0227 16:57:22.512752 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a3dbc9e2-dda0-4089-b070-bb06b8369491-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:22 crc kubenswrapper[4708]: I0227 16:57:22.531576 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qvsn8"
Feb 27 16:57:22 crc kubenswrapper[4708]: I0227 16:57:22.770549 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"a3dbc9e2-dda0-4089-b070-bb06b8369491","Type":"ContainerDied","Data":"f38707d1e637fcf5637e8d76606c1c3abfed4b63206a39453237d7f0dcf53919"}
Feb 27 16:57:22 crc kubenswrapper[4708]: I0227 16:57:22.770601 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f38707d1e637fcf5637e8d76606c1c3abfed4b63206a39453237d7f0dcf53919"
Feb 27 16:57:22 crc kubenswrapper[4708]: I0227 16:57:22.770672 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc"
Feb 27 16:57:22 crc kubenswrapper[4708]: I0227 16:57:22.822395 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 16:57:22 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld
Feb 27 16:57:22 crc kubenswrapper[4708]: [+]process-running ok
Feb 27 16:57:22 crc kubenswrapper[4708]: healthz check failed
Feb 27 16:57:22 crc kubenswrapper[4708]: I0227 16:57:22.822465 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.067956 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.072016 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.174282 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p"
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.220414 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d5bcf86-041b-4cf2-9736-3a16b380a5aa-kubelet-dir\") pod \"4d5bcf86-041b-4cf2-9736-3a16b380a5aa\" (UID: \"4d5bcf86-041b-4cf2-9736-3a16b380a5aa\") "
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.220479 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d5bcf86-041b-4cf2-9736-3a16b380a5aa-kube-api-access\") pod \"4d5bcf86-041b-4cf2-9736-3a16b380a5aa\" (UID: \"4d5bcf86-041b-4cf2-9736-3a16b380a5aa\") "
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.221307 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4d5bcf86-041b-4cf2-9736-3a16b380a5aa-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4d5bcf86-041b-4cf2-9736-3a16b380a5aa" (UID: "4d5bcf86-041b-4cf2-9736-3a16b380a5aa"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.227674 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d5bcf86-041b-4cf2-9736-3a16b380a5aa-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4d5bcf86-041b-4cf2-9736-3a16b380a5aa" (UID: "4d5bcf86-041b-4cf2-9736-3a16b380a5aa"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.322144 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tq2j\" (UniqueName: \"kubernetes.io/projected/1284b6e4-1c2c-443e-b18d-163396ede328-kube-api-access-5tq2j\") pod \"1284b6e4-1c2c-443e-b18d-163396ede328\" (UID: \"1284b6e4-1c2c-443e-b18d-163396ede328\") "
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.322245 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1284b6e4-1c2c-443e-b18d-163396ede328-secret-volume\") pod \"1284b6e4-1c2c-443e-b18d-163396ede328\" (UID: \"1284b6e4-1c2c-443e-b18d-163396ede328\") "
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.322341 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1284b6e4-1c2c-443e-b18d-163396ede328-config-volume\") pod \"1284b6e4-1c2c-443e-b18d-163396ede328\" (UID: \"1284b6e4-1c2c-443e-b18d-163396ede328\") "
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.326104 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4d5bcf86-041b-4cf2-9736-3a16b380a5aa-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.326138 4708 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4d5bcf86-041b-4cf2-9736-3a16b380a5aa-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.327177 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1284b6e4-1c2c-443e-b18d-163396ede328-config-volume" (OuterVolumeSpecName: "config-volume") pod "1284b6e4-1c2c-443e-b18d-163396ede328" (UID: "1284b6e4-1c2c-443e-b18d-163396ede328"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.328028 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1284b6e4-1c2c-443e-b18d-163396ede328-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "1284b6e4-1c2c-443e-b18d-163396ede328" (UID: "1284b6e4-1c2c-443e-b18d-163396ede328"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.350423 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1284b6e4-1c2c-443e-b18d-163396ede328-kube-api-access-5tq2j" (OuterVolumeSpecName: "kube-api-access-5tq2j") pod "1284b6e4-1c2c-443e-b18d-163396ede328" (UID: "1284b6e4-1c2c-443e-b18d-163396ede328"). InnerVolumeSpecName "kube-api-access-5tq2j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.431567 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tq2j\" (UniqueName: \"kubernetes.io/projected/1284b6e4-1c2c-443e-b18d-163396ede328-kube-api-access-5tq2j\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.431611 4708 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/1284b6e4-1c2c-443e-b18d-163396ede328-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.431625 4708 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1284b6e4-1c2c-443e-b18d-163396ede328-config-volume\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.735025 4708 ???:1] "http: TLS handshake error from 192.168.126.11:47200: no serving certificate available for the kubelet"
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.812704 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p"
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.814873 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p" event={"ID":"1284b6e4-1c2c-443e-b18d-163396ede328","Type":"ContainerDied","Data":"32e99d0a63839f70764fc4eeb5774865e858a315aff248e02a79a02d776f142a"}
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.814925 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="32e99d0a63839f70764fc4eeb5774865e858a315aff248e02a79a02d776f142a"
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.821440 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 16:57:23 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld
Feb 27 16:57:23 crc kubenswrapper[4708]: [+]process-running ok
Feb 27 16:57:23 crc kubenswrapper[4708]: healthz check failed
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.821494 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.829132 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4d5bcf86-041b-4cf2-9736-3a16b380a5aa","Type":"ContainerDied","Data":"2b0ae27d95d71f5e2befc54404cbb6b7616197e305688a96f832f495b0af0c56"}
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.829166 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b0ae27d95d71f5e2befc54404cbb6b7616197e305688a96f832f495b0af0c56"
Feb 27 16:57:23 crc kubenswrapper[4708]: I0227 16:57:23.829241 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc"
Feb 27 16:57:24 crc kubenswrapper[4708]: I0227 16:57:24.826069 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 16:57:24 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld
Feb 27 16:57:24 crc kubenswrapper[4708]: [+]process-running ok
Feb 27 16:57:24 crc kubenswrapper[4708]: healthz check failed
Feb 27 16:57:24 crc kubenswrapper[4708]: I0227 16:57:24.826136 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 16:57:25 crc kubenswrapper[4708]: I0227 16:57:25.011875 4708 ???:1] "http: TLS handshake error from 192.168.126.11:47204: no serving certificate available for the kubelet"
Feb 27 16:57:25 crc kubenswrapper[4708]: I0227 16:57:25.820835 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 16:57:25 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld
Feb 27 16:57:25 crc kubenswrapper[4708]: [+]process-running ok
Feb 27 16:57:25 crc kubenswrapper[4708]: healthz check failed
Feb 27 16:57:25 crc kubenswrapper[4708]: I0227 16:57:25.821220 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 16:57:26 crc kubenswrapper[4708]: I0227 16:57:26.820970 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 16:57:26 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld
Feb 27 16:57:26 crc kubenswrapper[4708]: [+]process-running ok
Feb 27 16:57:26 crc kubenswrapper[4708]: healthz check failed
Feb 27 16:57:26 crc kubenswrapper[4708]: I0227 16:57:26.821143 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 16:57:27 crc kubenswrapper[4708]: I0227 16:57:27.822533 4708 patch_prober.go:28] interesting pod/router-default-5444994796-n69rk container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Feb 27 16:57:27 crc kubenswrapper[4708]: [-]has-synced failed: reason withheld
Feb 27 16:57:27 crc kubenswrapper[4708]: [+]process-running ok
Feb 27 16:57:27 crc kubenswrapper[4708]: healthz check failed
Feb 27 16:57:27 crc kubenswrapper[4708]: I0227 16:57:27.822686 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-n69rk" podUID="f91736b1-bf6f-426e-8c0f-cfaac70c16f1" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Feb 27 16:57:28 crc kubenswrapper[4708]: I0227 16:57:28.821952 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-n69rk"
Feb 27 16:57:28 crc kubenswrapper[4708]: I0227 16:57:28.824827 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-n69rk"
Feb 27 16:57:29 crc kubenswrapper[4708]: I0227 16:57:29.318727 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-bhsw7"
Feb 27 16:57:29 crc kubenswrapper[4708]: I0227 16:57:29.444241 4708 patch_prober.go:28] interesting pod/console-f9d7485db-cl8l9 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Feb 27 16:57:29 crc kubenswrapper[4708]: I0227 16:57:29.444309 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-cl8l9" podUID="bd7c826a-ca70-4d4f-90ca-96f0b72c173a" containerName="console" probeResult="failure" output="Get \"https://10.217.0.6:8443/health\": dial tcp 10.217.0.6:8443: connect: connection refused"
Feb 27 16:57:34 crc kubenswrapper[4708]: I0227 16:57:34.007523 4708 ???:1] "http: TLS handshake error from 192.168.126.11:50906: no serving certificate available for the kubelet"
Feb 27 16:57:35 crc kubenswrapper[4708]: I0227 16:57:35.632542 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 16:57:35 crc kubenswrapper[4708]: I0227 16:57:35.632676 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 16:57:35 crc kubenswrapper[4708]: I0227 16:57:35.674569 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-678487474-jn4cf"]
Feb 27 16:57:35 crc kubenswrapper[4708]: I0227 16:57:35.674961 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-678487474-jn4cf" podUID="8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46" containerName="controller-manager" containerID="cri-o://d57edeb945e6d47bc361cf0305cbda054ce46dcd664505e8b47b4f6517c05303" gracePeriod=30
Feb 27 16:57:35 crc kubenswrapper[4708]: I0227 16:57:35.697464 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5"]
Feb 27 16:57:35 crc kubenswrapper[4708]: I0227 16:57:35.698247 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" podUID="75da6fa3-efd9-4f21-a7bc-ec0db67ed26c" containerName="route-controller-manager" containerID="cri-o://58e847b3305bbff64cbea9186ce35f98f85364d0fb901752d5f3b868a3c40eb9" gracePeriod=30
Feb 27 16:57:36 crc kubenswrapper[4708]: I0227 16:57:36.925829 4708 generic.go:334] "Generic (PLEG): container finished" podID="8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46" containerID="d57edeb945e6d47bc361cf0305cbda054ce46dcd664505e8b47b4f6517c05303" exitCode=0
Feb 27 16:57:36 crc kubenswrapper[4708]: I0227 16:57:36.925928 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678487474-jn4cf" event={"ID":"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46","Type":"ContainerDied","Data":"d57edeb945e6d47bc361cf0305cbda054ce46dcd664505e8b47b4f6517c05303"}
Feb 27 16:57:37 crc kubenswrapper[4708]: I0227 16:57:37.724235 4708 patch_prober.go:28] interesting pod/controller-manager-678487474-jn4cf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body=
Feb 27 16:57:37 crc kubenswrapper[4708]: I0227 16:57:37.724334 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-678487474-jn4cf" podUID="8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused"
Feb 27 16:57:37 crc kubenswrapper[4708]: I0227 16:57:37.737814 4708 patch_prober.go:28] interesting pod/route-controller-manager-54b776fb6d-xzjl5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body=
Feb 27 16:57:37 crc kubenswrapper[4708]: I0227 16:57:37.737906 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" podUID="75da6fa3-efd9-4f21-a7bc-ec0db67ed26c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused"
Feb 27 16:57:37 crc kubenswrapper[4708]: I0227 16:57:37.908487 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-89q5w"
Feb 27 16:57:37 crc kubenswrapper[4708]: I0227 16:57:37.937376 4708 generic.go:334] "Generic (PLEG): container finished" podID="75da6fa3-efd9-4f21-a7bc-ec0db67ed26c" containerID="58e847b3305bbff64cbea9186ce35f98f85364d0fb901752d5f3b868a3c40eb9" exitCode=0
Feb 27 16:57:37 crc kubenswrapper[4708]: I0227 16:57:37.937421 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" event={"ID":"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c","Type":"ContainerDied","Data":"58e847b3305bbff64cbea9186ce35f98f85364d0fb901752d5f3b868a3c40eb9"}
Feb 27 16:57:39 crc kubenswrapper[4708]: I0227 16:57:39.452279 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-cl8l9"
Feb 27 16:57:39 crc kubenswrapper[4708]: I0227 16:57:39.459462 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-cl8l9"
Feb 27 16:57:47 crc kubenswrapper[4708]: I0227 16:57:47.724630 4708 patch_prober.go:28] interesting pod/controller-manager-678487474-jn4cf container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused" start-of-body=
Feb 27 16:57:47 crc kubenswrapper[4708]: I0227 16:57:47.725240 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-678487474-jn4cf" podUID="8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.50:8443/healthz\": dial tcp 10.217.0.50:8443: connect: connection refused"
Feb 27 16:57:47 crc kubenswrapper[4708]: I0227 16:57:47.737458 4708 patch_prober.go:28] interesting pod/route-controller-manager-54b776fb6d-xzjl5 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused" start-of-body=
Feb 27 16:57:47 crc kubenswrapper[4708]: I0227 16:57:47.737518 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" podUID="75da6fa3-efd9-4f21-a7bc-ec0db67ed26c" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.51:8443/healthz\": dial tcp 10.217.0.51:8443: connect: connection refused"
Feb 27 16:57:49 crc kubenswrapper[4708]: E0227 16:57:49.232264 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/openshift4/ose-cli:latest"
Feb 27 16:57:49 crc kubenswrapper[4708]: E0227 16:57:49.233084 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 16:57:49 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Feb 27 16:57:49 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mjwdq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536856-lj688_openshift-infra(c8c016d5-5c1f-4680-a678-8568d218617e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled
Feb 27 16:57:49 crc kubenswrapper[4708]: > logger="UnhandledError"
Feb 27 16:57:49 crc kubenswrapper[4708]: E0227 16:57:49.234432 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-infra/auto-csr-approver-29536856-lj688" podUID="c8c016d5-5c1f-4680-a678-8568d218617e"
Feb 27 16:57:49 crc kubenswrapper[4708]: E0227 16:57:49.489578 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536856-lj688" podUID="c8c016d5-5c1f-4680-a678-8568d218617e"
Feb 27 16:57:49 crc kubenswrapper[4708]: I0227 16:57:49.823318 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5pcgl"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.363417 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.367252 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678487474-jn4cf"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.415126 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"]
Feb 27 16:57:52 crc kubenswrapper[4708]: E0227 16:57:52.416329 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1284b6e4-1c2c-443e-b18d-163396ede328" containerName="collect-profiles"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.416353 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1284b6e4-1c2c-443e-b18d-163396ede328" containerName="collect-profiles"
Feb 27 16:57:52 crc kubenswrapper[4708]: E0227 16:57:52.416362 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46" containerName="controller-manager"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.416369 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46" containerName="controller-manager"
Feb 27 16:57:52 crc kubenswrapper[4708]: E0227 16:57:52.416380 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75da6fa3-efd9-4f21-a7bc-ec0db67ed26c" containerName="route-controller-manager"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.416406 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="75da6fa3-efd9-4f21-a7bc-ec0db67ed26c" containerName="route-controller-manager"
Feb 27 16:57:52 crc kubenswrapper[4708]: E0227 16:57:52.416417 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d5bcf86-041b-4cf2-9736-3a16b380a5aa" containerName="pruner"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.416423 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d5bcf86-041b-4cf2-9736-3a16b380a5aa" containerName="pruner"
Feb 27 16:57:52 crc kubenswrapper[4708]: E0227 16:57:52.416431 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a3dbc9e2-dda0-4089-b070-bb06b8369491" containerName="pruner"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.416436 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3dbc9e2-dda0-4089-b070-bb06b8369491" containerName="pruner"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.416639 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="a3dbc9e2-dda0-4089-b070-bb06b8369491" containerName="pruner"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.416649 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="75da6fa3-efd9-4f21-a7bc-ec0db67ed26c" containerName="route-controller-manager"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.416658 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46" containerName="controller-manager"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.416666 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d5bcf86-041b-4cf2-9736-3a16b380a5aa" containerName="pruner"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.416676 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="1284b6e4-1c2c-443e-b18d-163396ede328" containerName="collect-profiles"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.417297 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.419494 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"]
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.469551 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-serving-cert\") pod \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") "
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.469611 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-proxy-ca-bundles\") pod \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") "
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.469646 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsmg9\" (UniqueName: \"kubernetes.io/projected/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-kube-api-access-wsmg9\") pod \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") "
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.469693 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rq686\" (UniqueName: \"kubernetes.io/projected/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-kube-api-access-rq686\") pod \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") "
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.469738 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-serving-cert\") pod \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") "
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.469782 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-client-ca\") pod \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") "
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.469806 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-config\") pod \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\" (UID: \"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46\") "
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.469824 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-client-ca\") pod \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") "
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.469884 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-config\") pod \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\" (UID: \"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c\") "
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.470729 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-client-ca" (OuterVolumeSpecName: "client-ca") pod "75da6fa3-efd9-4f21-a7bc-ec0db67ed26c" (UID: "75da6fa3-efd9-4f21-a7bc-ec0db67ed26c"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.471129 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-client-ca" (OuterVolumeSpecName: "client-ca") pod "8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46" (UID: "8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.471138 4708 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.471150 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46" (UID: "8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.471261 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-config" (OuterVolumeSpecName: "config") pod "8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46" (UID: "8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.471333 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-config" (OuterVolumeSpecName: "config") pod "75da6fa3-efd9-4f21-a7bc-ec0db67ed26c" (UID: "75da6fa3-efd9-4f21-a7bc-ec0db67ed26c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.479095 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "75da6fa3-efd9-4f21-a7bc-ec0db67ed26c" (UID: "75da6fa3-efd9-4f21-a7bc-ec0db67ed26c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.479372 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46" (UID: "8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.479602 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-kube-api-access-rq686" (OuterVolumeSpecName: "kube-api-access-rq686") pod "8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46" (UID: "8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46"). InnerVolumeSpecName "kube-api-access-rq686". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.480668 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-kube-api-access-wsmg9" (OuterVolumeSpecName: "kube-api-access-wsmg9") pod "75da6fa3-efd9-4f21-a7bc-ec0db67ed26c" (UID: "75da6fa3-efd9-4f21-a7bc-ec0db67ed26c"). InnerVolumeSpecName "kube-api-access-wsmg9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.505024 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-678487474-jn4cf" event={"ID":"8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46","Type":"ContainerDied","Data":"cd56577fc561891e4ecdda0df4f9437f6321b4b8fb09200cc3385130e5718ed6"}
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.505090 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-678487474-jn4cf"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.505112 4708 scope.go:117] "RemoveContainer" containerID="d57edeb945e6d47bc361cf0305cbda054ce46dcd664505e8b47b4f6517c05303"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.508658 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5" event={"ID":"75da6fa3-efd9-4f21-a7bc-ec0db67ed26c","Type":"ContainerDied","Data":"518092f457c1c28ecf0f6f9de7b4ed4ef7246690834c4aef9d52a0ad013db08e"}
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.508763 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.545364 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-678487474-jn4cf"]
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.547780 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-678487474-jn4cf"]
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.558580 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5"]
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.564060 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-54b776fb6d-xzjl5"]
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.572524 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259ffaeb-0da0-4c0a-af53-c67058301b51-config\") pod \"route-controller-manager-656589f589-xw5p4\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.572569 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6cfp\" (UniqueName: \"kubernetes.io/projected/259ffaeb-0da0-4c0a-af53-c67058301b51-kube-api-access-f6cfp\") pod \"route-controller-manager-656589f589-xw5p4\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.572604 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/259ffaeb-0da0-4c0a-af53-c67058301b51-serving-cert\") pod \"route-controller-manager-656589f589-xw5p4\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.572650 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/259ffaeb-0da0-4c0a-af53-c67058301b51-client-ca\") pod \"route-controller-manager-656589f589-xw5p4\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.572692 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.572703 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.572715 4708 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.572727 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsmg9\" (UniqueName: \"kubernetes.io/projected/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c-kube-api-access-wsmg9\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.572737 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rq686\" (UniqueName: \"kubernetes.io/projected/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-kube-api-access-rq686\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.572746 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.572754 4708 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.572762 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.674078 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259ffaeb-0da0-4c0a-af53-c67058301b51-config\") pod \"route-controller-manager-656589f589-xw5p4\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.674137 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6cfp\" (UniqueName: \"kubernetes.io/projected/259ffaeb-0da0-4c0a-af53-c67058301b51-kube-api-access-f6cfp\") pod \"route-controller-manager-656589f589-xw5p4\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.674173 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/259ffaeb-0da0-4c0a-af53-c67058301b51-serving-cert\") pod \"route-controller-manager-656589f589-xw5p4\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.674220 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/259ffaeb-0da0-4c0a-af53-c67058301b51-client-ca\") pod \"route-controller-manager-656589f589-xw5p4\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.675053 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/259ffaeb-0da0-4c0a-af53-c67058301b51-client-ca\") pod \"route-controller-manager-656589f589-xw5p4\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.675238 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259ffaeb-0da0-4c0a-af53-c67058301b51-config\") pod \"route-controller-manager-656589f589-xw5p4\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.680257 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/259ffaeb-0da0-4c0a-af53-c67058301b51-serving-cert\") pod \"route-controller-manager-656589f589-xw5p4\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.694606 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6cfp\" (UniqueName: \"kubernetes.io/projected/259ffaeb-0da0-4c0a-af53-c67058301b51-kube-api-access-f6cfp\") pod \"route-controller-manager-656589f589-xw5p4\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:57:52 crc kubenswrapper[4708]: I0227 16:57:52.741263 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:57:53 crc kubenswrapper[4708]: I0227 16:57:53.073589 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Feb 27 16:57:53 crc kubenswrapper[4708]: I0227 16:57:53.403818 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"]
Feb 27 16:57:53 crc kubenswrapper[4708]: I0227 16:57:53.404487 4708 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:57:53 crc kubenswrapper[4708]: I0227 16:57:53.406665 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Feb 27 16:57:53 crc kubenswrapper[4708]: I0227 16:57:53.407271 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Feb 27 16:57:53 crc kubenswrapper[4708]: I0227 16:57:53.448574 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 27 16:57:53 crc kubenswrapper[4708]: I0227 16:57:53.485676 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19acf0cc-9d56-41e7-a7cb-4fe6d3695a14-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"19acf0cc-9d56-41e7-a7cb-4fe6d3695a14\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:57:53 crc kubenswrapper[4708]: I0227 16:57:53.485797 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19acf0cc-9d56-41e7-a7cb-4fe6d3695a14-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"19acf0cc-9d56-41e7-a7cb-4fe6d3695a14\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:57:53 crc kubenswrapper[4708]: I0227 16:57:53.587324 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19acf0cc-9d56-41e7-a7cb-4fe6d3695a14-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"19acf0cc-9d56-41e7-a7cb-4fe6d3695a14\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:57:53 crc kubenswrapper[4708]: I0227 16:57:53.587428 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19acf0cc-9d56-41e7-a7cb-4fe6d3695a14-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"19acf0cc-9d56-41e7-a7cb-4fe6d3695a14\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:57:53 crc kubenswrapper[4708]: I0227 16:57:53.587439 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19acf0cc-9d56-41e7-a7cb-4fe6d3695a14-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"19acf0cc-9d56-41e7-a7cb-4fe6d3695a14\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:57:53 crc kubenswrapper[4708]: I0227 16:57:53.610809 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19acf0cc-9d56-41e7-a7cb-4fe6d3695a14-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"19acf0cc-9d56-41e7-a7cb-4fe6d3695a14\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:57:53 crc kubenswrapper[4708]: I0227 16:57:53.735689 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Feb 27 16:57:54 crc kubenswrapper[4708]: E0227 16:57:54.085869 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 16:57:54 crc kubenswrapper[4708]: E0227 16:57:54.086158 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d8m5l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-xmm5v_openshift-marketplace(b091d644-ad3d-4b63-976d-16e3c0caa3e4): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:57:54 crc kubenswrapper[4708]: E0227 16:57:54.087616 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-xmm5v" podUID="b091d644-ad3d-4b63-976d-16e3c0caa3e4" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.237161 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75da6fa3-efd9-4f21-a7bc-ec0db67ed26c" path="/var/lib/kubelet/pods/75da6fa3-efd9-4f21-a7bc-ec0db67ed26c/volumes" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.238250 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46" path="/var/lib/kubelet/pods/8e3175a0-bf5f-4dfa-9fa4-4066a7e7ae46/volumes" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.588985 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8"] Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.589744 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.593562 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.593985 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.594387 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.595125 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.595326 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.595365 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.598548 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8"] Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.601787 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.701749 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1739fbfb-7603-41f7-ac9c-43d99a3ad069-serving-cert\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.702086 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltq2k\" (UniqueName: \"kubernetes.io/projected/1739fbfb-7603-41f7-ac9c-43d99a3ad069-kube-api-access-ltq2k\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.702133 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-client-ca\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.702183 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-config\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.702210 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-proxy-ca-bundles\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.803949 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-config\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.804011 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-proxy-ca-bundles\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.804062 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1739fbfb-7603-41f7-ac9c-43d99a3ad069-serving-cert\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.804109 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltq2k\" (UniqueName: \"kubernetes.io/projected/1739fbfb-7603-41f7-ac9c-43d99a3ad069-kube-api-access-ltq2k\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.804154 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-client-ca\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.805316 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-client-ca\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.805392 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-proxy-ca-bundles\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.810540 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1739fbfb-7603-41f7-ac9c-43d99a3ad069-serving-cert\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" 
Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.814315 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-config\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.820300 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltq2k\" (UniqueName: \"kubernetes.io/projected/1739fbfb-7603-41f7-ac9c-43d99a3ad069-kube-api-access-ltq2k\") pod \"controller-manager-d4dcd9fb7-rfbt8\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:54 crc kubenswrapper[4708]: I0227 16:57:54.956693 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:57:55 crc kubenswrapper[4708]: I0227 16:57:55.624404 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8"] Feb 27 16:57:55 crc kubenswrapper[4708]: I0227 16:57:55.724685 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"] Feb 27 16:57:56 crc kubenswrapper[4708]: E0227 16:57:56.105760 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-xmm5v" podUID="b091d644-ad3d-4b63-976d-16e3c0caa3e4" Feb 27 16:57:56 crc kubenswrapper[4708]: E0227 16:57:56.189972 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 16:57:56 crc kubenswrapper[4708]: E0227 16:57:56.190104 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-586qv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-7rtdw_openshift-marketplace(9b733486-f273-4bd5-afa3-d35d3d1feafc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:57:56 crc kubenswrapper[4708]: E0227 16:57:56.191422 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-7rtdw" podUID="9b733486-f273-4bd5-afa3-d35d3d1feafc" Feb 27 16:57:57 crc kubenswrapper[4708]: E0227 16:57:57.652330 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-7rtdw" podUID="9b733486-f273-4bd5-afa3-d35d3d1feafc" Feb 27 16:57:57 crc kubenswrapper[4708]: E0227 16:57:57.728769 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 16:57:57 crc kubenswrapper[4708]: E0227 16:57:57.729014 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-q4g9c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-zvqlm_openshift-marketplace(5710135c-fd59-4ff6-b74a-ad7ab8730aff): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:57:57 crc kubenswrapper[4708]: E0227 16:57:57.730268 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-zvqlm" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" Feb 27 16:57:57 crc kubenswrapper[4708]: E0227 16:57:57.804167 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 16:57:57 crc kubenswrapper[4708]: E0227 16:57:57.804338 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tq7h6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-p5lwl_openshift-marketplace(5c38d70c-968f-44dd-b42b-013bc033debb): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:57:57 crc kubenswrapper[4708]: E0227 16:57:57.805534 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-p5lwl" podUID="5c38d70c-968f-44dd-b42b-013bc033debb" Feb 27 16:57:59 crc kubenswrapper[4708]: I0227 16:57:59.202820 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 16:57:59 crc kubenswrapper[4708]: I0227 16:57:59.203772 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:57:59 crc kubenswrapper[4708]: I0227 16:57:59.213802 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 16:57:59 crc kubenswrapper[4708]: I0227 16:57:59.378478 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-var-lock\") pod \"installer-9-crc\" (UID: \"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:57:59 crc kubenswrapper[4708]: I0227 16:57:59.378531 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:57:59 crc kubenswrapper[4708]: I0227 16:57:59.378586 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-kube-api-access\") pod \"installer-9-crc\" (UID: \"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:57:59 crc kubenswrapper[4708]: I0227 16:57:59.480061 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-var-lock\") pod \"installer-9-crc\" (UID: \"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:57:59 crc kubenswrapper[4708]: I0227 16:57:59.480115 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:57:59 crc kubenswrapper[4708]: I0227 16:57:59.480155 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-kube-api-access\") pod \"installer-9-crc\" (UID: \"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:57:59 crc kubenswrapper[4708]: I0227 16:57:59.480220 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-kubelet-dir\") pod \"installer-9-crc\" (UID: \"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:57:59 crc kubenswrapper[4708]: I0227 16:57:59.480264 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-var-lock\") pod \"installer-9-crc\" (UID: \"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:57:59 crc kubenswrapper[4708]: I0227 16:57:59.503970 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-kube-api-access\") pod \"installer-9-crc\" (UID: 
\"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\") " pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:57:59 crc kubenswrapper[4708]: I0227 16:57:59.532866 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:58:00 crc kubenswrapper[4708]: I0227 16:58:00.135735 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536858-rfn4q"] Feb 27 16:58:00 crc kubenswrapper[4708]: I0227 16:58:00.137609 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536858-rfn4q" Feb 27 16:58:00 crc kubenswrapper[4708]: I0227 16:58:00.138616 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536858-rfn4q"] Feb 27 16:58:00 crc kubenswrapper[4708]: I0227 16:58:00.139453 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 16:58:00 crc kubenswrapper[4708]: I0227 16:58:00.293304 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dftml\" (UniqueName: \"kubernetes.io/projected/c99fdbbd-b661-4920-975d-c72e040d08fa-kube-api-access-dftml\") pod \"auto-csr-approver-29536858-rfn4q\" (UID: \"c99fdbbd-b661-4920-975d-c72e040d08fa\") " pod="openshift-infra/auto-csr-approver-29536858-rfn4q" Feb 27 16:58:00 crc kubenswrapper[4708]: I0227 16:58:00.394287 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dftml\" (UniqueName: \"kubernetes.io/projected/c99fdbbd-b661-4920-975d-c72e040d08fa-kube-api-access-dftml\") pod \"auto-csr-approver-29536858-rfn4q\" (UID: \"c99fdbbd-b661-4920-975d-c72e040d08fa\") " pod="openshift-infra/auto-csr-approver-29536858-rfn4q" Feb 27 16:58:00 crc kubenswrapper[4708]: I0227 16:58:00.419403 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dftml\" (UniqueName: \"kubernetes.io/projected/c99fdbbd-b661-4920-975d-c72e040d08fa-kube-api-access-dftml\") pod \"auto-csr-approver-29536858-rfn4q\" (UID: \"c99fdbbd-b661-4920-975d-c72e040d08fa\") " pod="openshift-infra/auto-csr-approver-29536858-rfn4q" Feb 27 16:58:00 crc kubenswrapper[4708]: I0227 16:58:00.461149 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536858-rfn4q" Feb 27 16:58:01 crc kubenswrapper[4708]: E0227 16:58:01.484642 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-zvqlm" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" Feb 27 16:58:01 crc kubenswrapper[4708]: E0227 16:58:01.484686 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-p5lwl" podUID="5c38d70c-968f-44dd-b42b-013bc033debb" Feb 27 16:58:01 crc kubenswrapper[4708]: I0227 16:58:01.533758 4708 scope.go:117] "RemoveContainer" containerID="58e847b3305bbff64cbea9186ce35f98f85364d0fb901752d5f3b868a3c40eb9" Feb 27 16:58:01 crc kubenswrapper[4708]: E0227 16:58:01.576035 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 16:58:01 crc kubenswrapper[4708]: E0227 16:58:01.576210 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4czbr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-lmzsx_openshift-marketplace(96160365-88cf-419c-a2d2-04818cde5016): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:58:01 crc kubenswrapper[4708]: E0227 16:58:01.577611 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled\"" pod="openshift-marketplace/redhat-operators-lmzsx" podUID="96160365-88cf-419c-a2d2-04818cde5016" Feb 27 16:58:01 crc kubenswrapper[4708]: E0227 16:58:01.621262 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 16:58:01 crc kubenswrapper[4708]: E0227 16:58:01.623898 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-p5jjf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-j29cw_openshift-marketplace(73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:58:01 crc kubenswrapper[4708]: E0227 16:58:01.626104 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-j29cw" podUID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" Feb 27 16:58:01 crc kubenswrapper[4708]: E0227 16:58:01.651085 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 16:58:01 crc kubenswrapper[4708]: E0227 16:58:01.651247 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gncnx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ggb2w_openshift-marketplace(b2d410d4-9144-42b4-96c9-345732131a7e): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:58:01 crc kubenswrapper[4708]: E0227 16:58:01.653113 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ggb2w" podUID="b2d410d4-9144-42b4-96c9-345732131a7e" Feb 27 16:58:01 crc kubenswrapper[4708]: E0227 16:58:01.676457 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 16:58:01 crc kubenswrapper[4708]: E0227 16:58:01.676641 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-44qz8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-hw5dq_openshift-marketplace(70493bd3-d5c2-49e2-bd00-ac98325a2187): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Feb 27 16:58:01 crc kubenswrapper[4708]: E0227 16:58:01.678011 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-hw5dq" podUID="70493bd3-d5c2-49e2-bd00-ac98325a2187" Feb 27 16:58:01 crc kubenswrapper[4708]: I0227 16:58:01.789902 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.032504 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536858-rfn4q"] Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.038251 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Feb 27 16:58:02 crc kubenswrapper[4708]: W0227 16:58:02.049629 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod19acf0cc_9d56_41e7_a7cb_4fe6d3695a14.slice/crio-19c2ebfd7f90390746318a5ddf9c75aef3b324005f9fd2f3dd0172aede7cf48d WatchSource:0}: Error finding container 19c2ebfd7f90390746318a5ddf9c75aef3b324005f9fd2f3dd0172aede7cf48d: Status 404 returned error can't find the container with id 19c2ebfd7f90390746318a5ddf9c75aef3b324005f9fd2f3dd0172aede7cf48d Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.123988 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8"] Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.127242 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"] Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.564722 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536858-rfn4q" 
event={"ID":"c99fdbbd-b661-4920-975d-c72e040d08fa","Type":"ContainerStarted","Data":"c7d53ff44f0a6a3e0190f7e36a09ed8e814e5b460f7c778dc80148e02442136d"} Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.579219 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" event={"ID":"1739fbfb-7603-41f7-ac9c-43d99a3ad069","Type":"ContainerStarted","Data":"ae541e25d6b0d696f059ddadee52ccdaefab89d7ee24c522665a36378b373cdd"} Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.579270 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" event={"ID":"1739fbfb-7603-41f7-ac9c-43d99a3ad069","Type":"ContainerStarted","Data":"916b2e1d7485c7cfcf1d99f9b0fede1500fde9584619c730223271b1f80dae57"} Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.579287 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" podUID="1739fbfb-7603-41f7-ac9c-43d99a3ad069" containerName="controller-manager" containerID="cri-o://ae541e25d6b0d696f059ddadee52ccdaefab89d7ee24c522665a36378b373cdd" gracePeriod=30 Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.579481 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.585214 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8b2eb3a2-5689-482d-82a4-b8ec5edf2418","Type":"ContainerStarted","Data":"76379c603afdfdf3fa0393ce2b048f7789d99f779be6febce1a61b9c21428db3"} Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.585282 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8b2eb3a2-5689-482d-82a4-b8ec5edf2418","Type":"ContainerStarted","Data":"f0accb6493c2316d66914436989427d32324fa6a16570099ad2661ae9c337c79"} Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.594777 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4" event={"ID":"259ffaeb-0da0-4c0a-af53-c67058301b51","Type":"ContainerStarted","Data":"9355c5bb3a0f0e0812bdf239f157cd4f7797f596c7e4e7741a5e89a919f3c2b4"} Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.594815 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4" event={"ID":"259ffaeb-0da0-4c0a-af53-c67058301b51","Type":"ContainerStarted","Data":"2d39fb896731a7d13cf50dc70b764a1bf78225fcb7782ec55e10a5eb2ec4dc4c"} Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.594871 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4" podUID="259ffaeb-0da0-4c0a-af53-c67058301b51" containerName="route-controller-manager" containerID="cri-o://9355c5bb3a0f0e0812bdf239f157cd4f7797f596c7e4e7741a5e89a919f3c2b4" gracePeriod=30 Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.594944 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4" Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.612057 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" podStartSLOduration=27.61203425 podStartE2EDuration="27.61203425s" podCreationTimestamp="2026-02-27 16:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:58:02.61129256 +0000 UTC m=+281.127090147" watchObservedRunningTime="2026-02-27 16:58:02.61203425 +0000 UTC m=+281.127831837" Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.615505 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"19acf0cc-9d56-41e7-a7cb-4fe6d3695a14","Type":"ContainerStarted","Data":"579c3e56160318c012b7380ec425474f8ef5e4b2199c521625235fe8774da56f"} Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.615539 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"19acf0cc-9d56-41e7-a7cb-4fe6d3695a14","Type":"ContainerStarted","Data":"19c2ebfd7f90390746318a5ddf9c75aef3b324005f9fd2f3dd0172aede7cf48d"} Feb 27 16:58:02 crc kubenswrapper[4708]: E0227 16:58:02.616586 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ggb2w" podUID="b2d410d4-9144-42b4-96c9-345732131a7e" Feb 27 16:58:02 crc kubenswrapper[4708]: E0227 16:58:02.617728 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-j29cw" podUID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" Feb 27 16:58:02 crc kubenswrapper[4708]: E0227 16:58:02.617801 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-hw5dq" podUID="70493bd3-d5c2-49e2-bd00-ac98325a2187" Feb 27 16:58:02 crc kubenswrapper[4708]: E0227 16:58:02.617837 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-lmzsx" podUID="96160365-88cf-419c-a2d2-04818cde5016" Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.623968 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.632579 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=3.63256739 podStartE2EDuration="3.63256739s" podCreationTimestamp="2026-02-27 16:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:58:02.632270942 +0000 UTC m=+281.148068529" watchObservedRunningTime="2026-02-27 16:58:02.63256739 +0000 UTC m=+281.148364967" Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.663812 4708 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4" podStartSLOduration=27.663788421 podStartE2EDuration="27.663788421s" podCreationTimestamp="2026-02-27 16:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:58:02.659400666 +0000 UTC m=+281.175198243" watchObservedRunningTime="2026-02-27 16:58:02.663788421 +0000 UTC m=+281.179586008" Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.775378 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=9.775355785 podStartE2EDuration="9.775355785s" podCreationTimestamp="2026-02-27 16:57:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:58:02.773276071 +0000 UTC m=+281.289073658" watchObservedRunningTime="2026-02-27 16:58:02.775355785 +0000 UTC m=+281.291153372" Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.966928 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.996720 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-79f56464fc-mm9fm"] Feb 27 16:58:02 crc kubenswrapper[4708]: E0227 16:58:02.997051 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1739fbfb-7603-41f7-ac9c-43d99a3ad069" containerName="controller-manager" Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.997074 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1739fbfb-7603-41f7-ac9c-43d99a3ad069" containerName="controller-manager" Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.997231 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="1739fbfb-7603-41f7-ac9c-43d99a3ad069" containerName="controller-manager" Feb 27 16:58:02 crc kubenswrapper[4708]: I0227 16:58:02.997796 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.020222 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79f56464fc-mm9fm"] Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.141318 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-client-ca\") pod \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.141401 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-proxy-ca-bundles\") pod \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.141536 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-config\") pod \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.141727 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ltq2k\" (UniqueName: \"kubernetes.io/projected/1739fbfb-7603-41f7-ac9c-43d99a3ad069-kube-api-access-ltq2k\") pod \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.141793 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1739fbfb-7603-41f7-ac9c-43d99a3ad069-serving-cert\") pod \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\" (UID: \"1739fbfb-7603-41f7-ac9c-43d99a3ad069\") " Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.142105 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799aa1e9-629d-481e-b62a-fd8e717acc6b-serving-cert\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.142194 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-proxy-ca-bundles\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.142244 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp9vn\" (UniqueName: \"kubernetes.io/projected/799aa1e9-629d-481e-b62a-fd8e717acc6b-kube-api-access-pp9vn\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.142291 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-client-ca\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.142364 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-config\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.143932 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-client-ca" (OuterVolumeSpecName: "client-ca") pod "1739fbfb-7603-41f7-ac9c-43d99a3ad069" (UID: "1739fbfb-7603-41f7-ac9c-43d99a3ad069"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.144525 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "1739fbfb-7603-41f7-ac9c-43d99a3ad069" (UID: "1739fbfb-7603-41f7-ac9c-43d99a3ad069"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.147046 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-config" (OuterVolumeSpecName: "config") pod "1739fbfb-7603-41f7-ac9c-43d99a3ad069" (UID: "1739fbfb-7603-41f7-ac9c-43d99a3ad069"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.152327 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1739fbfb-7603-41f7-ac9c-43d99a3ad069-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1739fbfb-7603-41f7-ac9c-43d99a3ad069" (UID: "1739fbfb-7603-41f7-ac9c-43d99a3ad069"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.152583 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1739fbfb-7603-41f7-ac9c-43d99a3ad069-kube-api-access-ltq2k" (OuterVolumeSpecName: "kube-api-access-ltq2k") pod "1739fbfb-7603-41f7-ac9c-43d99a3ad069" (UID: "1739fbfb-7603-41f7-ac9c-43d99a3ad069"). InnerVolumeSpecName "kube-api-access-ltq2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.170077 4708 csr.go:261] certificate signing request csr-5jjvn is approved, waiting to be issued Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.178782 4708 csr.go:257] certificate signing request csr-5jjvn is issued Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.183390 4708 patch_prober.go:28] interesting pod/route-controller-manager-656589f589-xw5p4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": read tcp 10.217.0.2:41868->10.217.0.57:8443: read: connection reset by peer" start-of-body= Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.183468 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4" podUID="259ffaeb-0da0-4c0a-af53-c67058301b51" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": read tcp 10.217.0.2:41868->10.217.0.57:8443: read: connection reset by peer" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.184084 4708 patch_prober.go:28] interesting pod/route-controller-manager-656589f589-xw5p4 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" start-of-body= Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.184146 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4" podUID="259ffaeb-0da0-4c0a-af53-c67058301b51" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": dial tcp 10.217.0.57:8443: connect: connection refused" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.243659 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799aa1e9-629d-481e-b62a-fd8e717acc6b-serving-cert\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.243720 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp9vn\" (UniqueName: \"kubernetes.io/projected/799aa1e9-629d-481e-b62a-fd8e717acc6b-kube-api-access-pp9vn\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.243746 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-proxy-ca-bundles\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.243771 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-client-ca\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: 
\"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.243814 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-config\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.243884 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.243898 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ltq2k\" (UniqueName: \"kubernetes.io/projected/1739fbfb-7603-41f7-ac9c-43d99a3ad069-kube-api-access-ltq2k\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.243909 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1739fbfb-7603-41f7-ac9c-43d99a3ad069-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.243919 4708 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.243927 4708 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/1739fbfb-7603-41f7-ac9c-43d99a3ad069-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.246869 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-client-ca\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.247068 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-proxy-ca-bundles\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.247570 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-config\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.248593 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799aa1e9-629d-481e-b62a-fd8e717acc6b-serving-cert\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.263087 4708 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp9vn\" (UniqueName: \"kubernetes.io/projected/799aa1e9-629d-481e-b62a-fd8e717acc6b-kube-api-access-pp9vn\") pod \"controller-manager-79f56464fc-mm9fm\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") " pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.322574 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.422653 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-656589f589-xw5p4_259ffaeb-0da0-4c0a-af53-c67058301b51/route-controller-manager/0.log" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.422735 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.547913 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259ffaeb-0da0-4c0a-af53-c67058301b51-config\") pod \"259ffaeb-0da0-4c0a-af53-c67058301b51\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.548339 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6cfp\" (UniqueName: \"kubernetes.io/projected/259ffaeb-0da0-4c0a-af53-c67058301b51-kube-api-access-f6cfp\") pod \"259ffaeb-0da0-4c0a-af53-c67058301b51\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.548379 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/259ffaeb-0da0-4c0a-af53-c67058301b51-serving-cert\") pod \"259ffaeb-0da0-4c0a-af53-c67058301b51\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.548410 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/259ffaeb-0da0-4c0a-af53-c67058301b51-client-ca\") pod \"259ffaeb-0da0-4c0a-af53-c67058301b51\" (UID: \"259ffaeb-0da0-4c0a-af53-c67058301b51\") " Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.548887 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/259ffaeb-0da0-4c0a-af53-c67058301b51-config" (OuterVolumeSpecName: "config") pod "259ffaeb-0da0-4c0a-af53-c67058301b51" (UID: "259ffaeb-0da0-4c0a-af53-c67058301b51"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.549094 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/259ffaeb-0da0-4c0a-af53-c67058301b51-client-ca" (OuterVolumeSpecName: "client-ca") pod "259ffaeb-0da0-4c0a-af53-c67058301b51" (UID: "259ffaeb-0da0-4c0a-af53-c67058301b51"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.554029 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/259ffaeb-0da0-4c0a-af53-c67058301b51-kube-api-access-f6cfp" (OuterVolumeSpecName: "kube-api-access-f6cfp") pod "259ffaeb-0da0-4c0a-af53-c67058301b51" (UID: "259ffaeb-0da0-4c0a-af53-c67058301b51"). InnerVolumeSpecName "kube-api-access-f6cfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.555574 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/259ffaeb-0da0-4c0a-af53-c67058301b51-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "259ffaeb-0da0-4c0a-af53-c67058301b51" (UID: "259ffaeb-0da0-4c0a-af53-c67058301b51"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.558186 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-79f56464fc-mm9fm"] Feb 27 16:58:03 crc kubenswrapper[4708]: W0227 16:58:03.572349 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod799aa1e9_629d_481e_b62a_fd8e717acc6b.slice/crio-8de0acf2f9c829569d4b4caf0248e0bf610c34ef30957c0f254cc01ab46d64e6 WatchSource:0}: Error finding container 8de0acf2f9c829569d4b4caf0248e0bf610c34ef30957c0f254cc01ab46d64e6: Status 404 returned error can't find the container with id 8de0acf2f9c829569d4b4caf0248e0bf610c34ef30957c0f254cc01ab46d64e6 Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.623287 4708 generic.go:334] "Generic (PLEG): container finished" podID="c8c016d5-5c1f-4680-a678-8568d218617e" containerID="1e577a12ac8338e8a615ae393e48e602dfdf4491cf06ebec6dfae1b4cbfc399c" exitCode=0 Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.623373 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536856-lj688" event={"ID":"c8c016d5-5c1f-4680-a678-8568d218617e","Type":"ContainerDied","Data":"1e577a12ac8338e8a615ae393e48e602dfdf4491cf06ebec6dfae1b4cbfc399c"} Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.626166 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536858-rfn4q" event={"ID":"c99fdbbd-b661-4920-975d-c72e040d08fa","Type":"ContainerStarted","Data":"7c3afdfc9ffdea31879ef9a422441c28ca30755918ab57761fd5f28be8a5469c"} Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.627561 4708 generic.go:334] "Generic (PLEG): container finished" podID="1739fbfb-7603-41f7-ac9c-43d99a3ad069" containerID="ae541e25d6b0d696f059ddadee52ccdaefab89d7ee24c522665a36378b373cdd" exitCode=0 Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.627618 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" event={"ID":"1739fbfb-7603-41f7-ac9c-43d99a3ad069","Type":"ContainerDied","Data":"ae541e25d6b0d696f059ddadee52ccdaefab89d7ee24c522665a36378b373cdd"} Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.627649 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8" event={"ID":"1739fbfb-7603-41f7-ac9c-43d99a3ad069","Type":"ContainerDied","Data":"916b2e1d7485c7cfcf1d99f9b0fede1500fde9584619c730223271b1f80dae57"} Feb 27 16:58:03 crc kubenswrapper[4708]: 
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.627668 4708 scope.go:117] "RemoveContainer" containerID="ae541e25d6b0d696f059ddadee52ccdaefab89d7ee24c522665a36378b373cdd"
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.627770 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8"
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.637253 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-656589f589-xw5p4_259ffaeb-0da0-4c0a-af53-c67058301b51/route-controller-manager/0.log"
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.637312 4708 generic.go:334] "Generic (PLEG): container finished" podID="259ffaeb-0da0-4c0a-af53-c67058301b51" containerID="9355c5bb3a0f0e0812bdf239f157cd4f7797f596c7e4e7741a5e89a919f3c2b4" exitCode=255
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.637412 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.637422 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4" event={"ID":"259ffaeb-0da0-4c0a-af53-c67058301b51","Type":"ContainerDied","Data":"9355c5bb3a0f0e0812bdf239f157cd4f7797f596c7e4e7741a5e89a919f3c2b4"}
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.637545 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4" event={"ID":"259ffaeb-0da0-4c0a-af53-c67058301b51","Type":"ContainerDied","Data":"2d39fb896731a7d13cf50dc70b764a1bf78225fcb7782ec55e10a5eb2ec4dc4c"}
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.639931 4708 generic.go:334] "Generic (PLEG): container finished" podID="19acf0cc-9d56-41e7-a7cb-4fe6d3695a14" containerID="579c3e56160318c012b7380ec425474f8ef5e4b2199c521625235fe8774da56f" exitCode=0
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.639993 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"19acf0cc-9d56-41e7-a7cb-4fe6d3695a14","Type":"ContainerDied","Data":"579c3e56160318c012b7380ec425474f8ef5e4b2199c521625235fe8774da56f"}
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.642390 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" event={"ID":"799aa1e9-629d-481e-b62a-fd8e717acc6b","Type":"ContainerStarted","Data":"8de0acf2f9c829569d4b4caf0248e0bf610c34ef30957c0f254cc01ab46d64e6"}
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.648457 4708 scope.go:117] "RemoveContainer" containerID="ae541e25d6b0d696f059ddadee52ccdaefab89d7ee24c522665a36378b373cdd"
Feb 27 16:58:03 crc kubenswrapper[4708]: E0227 16:58:03.648954 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae541e25d6b0d696f059ddadee52ccdaefab89d7ee24c522665a36378b373cdd\": container with ID starting with ae541e25d6b0d696f059ddadee52ccdaefab89d7ee24c522665a36378b373cdd not found: ID does not exist" containerID="ae541e25d6b0d696f059ddadee52ccdaefab89d7ee24c522665a36378b373cdd"
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.649002 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae541e25d6b0d696f059ddadee52ccdaefab89d7ee24c522665a36378b373cdd"} err="failed to get container status \"ae541e25d6b0d696f059ddadee52ccdaefab89d7ee24c522665a36378b373cdd\": rpc error: code = NotFound desc = could not find container \"ae541e25d6b0d696f059ddadee52ccdaefab89d7ee24c522665a36378b373cdd\": container with ID starting with ae541e25d6b0d696f059ddadee52ccdaefab89d7ee24c522665a36378b373cdd not found: ID does not exist"
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.649031 4708 scope.go:117] "RemoveContainer" containerID="9355c5bb3a0f0e0812bdf239f157cd4f7797f596c7e4e7741a5e89a919f3c2b4"
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.650293 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6cfp\" (UniqueName: \"kubernetes.io/projected/259ffaeb-0da0-4c0a-af53-c67058301b51-kube-api-access-f6cfp\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.650346 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/259ffaeb-0da0-4c0a-af53-c67058301b51-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.650359 4708 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/259ffaeb-0da0-4c0a-af53-c67058301b51-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.650371 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/259ffaeb-0da0-4c0a-af53-c67058301b51-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.653803 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536858-rfn4q" podStartSLOduration=2.45693851 podStartE2EDuration="3.653783088s" podCreationTimestamp="2026-02-27 16:58:00 +0000 UTC" firstStartedPulling="2026-02-27 16:58:02.043949399 +0000 UTC m=+280.559746996" lastFinishedPulling="2026-02-27 16:58:03.240793987 +0000 UTC m=+281.756591574" observedRunningTime="2026-02-27 16:58:03.64891447 +0000 UTC m=+282.164712057" watchObservedRunningTime="2026-02-27 16:58:03.653783088 +0000 UTC m=+282.169580675"
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.682658 4708 scope.go:117] "RemoveContainer" containerID="9355c5bb3a0f0e0812bdf239f157cd4f7797f596c7e4e7741a5e89a919f3c2b4"
Feb 27 16:58:03 crc kubenswrapper[4708]: E0227 16:58:03.683264 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9355c5bb3a0f0e0812bdf239f157cd4f7797f596c7e4e7741a5e89a919f3c2b4\": container with ID starting with 9355c5bb3a0f0e0812bdf239f157cd4f7797f596c7e4e7741a5e89a919f3c2b4 not found: ID does not exist" containerID="9355c5bb3a0f0e0812bdf239f157cd4f7797f596c7e4e7741a5e89a919f3c2b4"
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.683304 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9355c5bb3a0f0e0812bdf239f157cd4f7797f596c7e4e7741a5e89a919f3c2b4"} err="failed to get container status \"9355c5bb3a0f0e0812bdf239f157cd4f7797f596c7e4e7741a5e89a919f3c2b4\": rpc error: code = NotFound desc = could not find container \"9355c5bb3a0f0e0812bdf239f157cd4f7797f596c7e4e7741a5e89a919f3c2b4\": container with ID starting with 9355c5bb3a0f0e0812bdf239f157cd4f7797f596c7e4e7741a5e89a919f3c2b4 not found: ID does not exist"
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.696951 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8"]
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.703750 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-d4dcd9fb7-rfbt8"]
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.708279 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"]
Feb 27 16:58:03 crc kubenswrapper[4708]: I0227 16:58:03.710981 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-656589f589-xw5p4"]
Feb 27 16:58:04 crc kubenswrapper[4708]: I0227 16:58:04.180258 4708 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-12-21 22:47:43.20969145 +0000 UTC
Feb 27 16:58:04 crc kubenswrapper[4708]: I0227 16:58:04.180620 4708 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7133h49m39.029074146s for next certificate rotation
Feb 27 16:58:04 crc kubenswrapper[4708]: I0227 16:58:04.235971 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1739fbfb-7603-41f7-ac9c-43d99a3ad069" path="/var/lib/kubelet/pods/1739fbfb-7603-41f7-ac9c-43d99a3ad069/volumes"
Feb 27 16:58:04 crc kubenswrapper[4708]: I0227 16:58:04.237922 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="259ffaeb-0da0-4c0a-af53-c67058301b51" path="/var/lib/kubelet/pods/259ffaeb-0da0-4c0a-af53-c67058301b51/volumes"
Feb 27 16:58:04 crc kubenswrapper[4708]: I0227 16:58:04.653912 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" event={"ID":"799aa1e9-629d-481e-b62a-fd8e717acc6b","Type":"ContainerStarted","Data":"14608b95c26287a996396e9d6c80a8d07d401713ca295bed224f774de333adbd"}
Feb 27 16:58:04 crc kubenswrapper[4708]: I0227 16:58:04.654440 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm"
Feb 27 16:58:04 crc kubenswrapper[4708]: I0227 16:58:04.657795 4708 generic.go:334] "Generic (PLEG): container finished" podID="c99fdbbd-b661-4920-975d-c72e040d08fa" containerID="7c3afdfc9ffdea31879ef9a422441c28ca30755918ab57761fd5f28be8a5469c" exitCode=0
Feb 27 16:58:04 crc kubenswrapper[4708]: I0227 16:58:04.657888 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536858-rfn4q" event={"ID":"c99fdbbd-b661-4920-975d-c72e040d08fa","Type":"ContainerDied","Data":"7c3afdfc9ffdea31879ef9a422441c28ca30755918ab57761fd5f28be8a5469c"}
Feb 27 16:58:04 crc kubenswrapper[4708]: I0227 16:58:04.662642 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm"
Feb 27 16:58:04 crc kubenswrapper[4708]: I0227 16:58:04.683023 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" podStartSLOduration=9.682998147 podStartE2EDuration="9.682998147s" podCreationTimestamp="2026-02-27 16:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:58:04.677896643 +0000 UTC m=+283.193694250" watchObservedRunningTime="2026-02-27 16:58:04.682998147 +0000 UTC m=+283.198795734"
Feb 27 16:58:04 crc kubenswrapper[4708]: I0227 16:58:04.986024 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536856-lj688"
Feb 27 16:58:04 crc kubenswrapper[4708]: I0227 16:58:04.992405 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.167082 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwdq\" (UniqueName: \"kubernetes.io/projected/c8c016d5-5c1f-4680-a678-8568d218617e-kube-api-access-mjwdq\") pod \"c8c016d5-5c1f-4680-a678-8568d218617e\" (UID: \"c8c016d5-5c1f-4680-a678-8568d218617e\") "
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.167165 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19acf0cc-9d56-41e7-a7cb-4fe6d3695a14-kubelet-dir\") pod \"19acf0cc-9d56-41e7-a7cb-4fe6d3695a14\" (UID: \"19acf0cc-9d56-41e7-a7cb-4fe6d3695a14\") "
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.167254 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19acf0cc-9d56-41e7-a7cb-4fe6d3695a14-kube-api-access\") pod \"19acf0cc-9d56-41e7-a7cb-4fe6d3695a14\" (UID: \"19acf0cc-9d56-41e7-a7cb-4fe6d3695a14\") "
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.167346 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19acf0cc-9d56-41e7-a7cb-4fe6d3695a14-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "19acf0cc-9d56-41e7-a7cb-4fe6d3695a14" (UID: "19acf0cc-9d56-41e7-a7cb-4fe6d3695a14"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.167771 4708 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/19acf0cc-9d56-41e7-a7cb-4fe6d3695a14-kubelet-dir\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.177128 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8c016d5-5c1f-4680-a678-8568d218617e-kube-api-access-mjwdq" (OuterVolumeSpecName: "kube-api-access-mjwdq") pod "c8c016d5-5c1f-4680-a678-8568d218617e" (UID: "c8c016d5-5c1f-4680-a678-8568d218617e"). InnerVolumeSpecName "kube-api-access-mjwdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.177301 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19acf0cc-9d56-41e7-a7cb-4fe6d3695a14-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "19acf0cc-9d56-41e7-a7cb-4fe6d3695a14" (UID: "19acf0cc-9d56-41e7-a7cb-4fe6d3695a14"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.180940 4708 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-02-24 05:54:36 +0000 UTC, rotation deadline is 2026-11-27 15:53:14.511220293 +0000 UTC
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.180980 4708 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6550h55m9.33024493s for next certificate rotation
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.269626 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/19acf0cc-9d56-41e7-a7cb-4fe6d3695a14-kube-api-access\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.269665 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mjwdq\" (UniqueName: \"kubernetes.io/projected/c8c016d5-5c1f-4680-a678-8568d218617e-kube-api-access-mjwdq\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.605654 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"]
Feb 27 16:58:05 crc kubenswrapper[4708]: E0227 16:58:05.606679 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8c016d5-5c1f-4680-a678-8568d218617e" containerName="oc"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.606801 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8c016d5-5c1f-4680-a678-8568d218617e" containerName="oc"
Feb 27 16:58:05 crc kubenswrapper[4708]: E0227 16:58:05.606928 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="259ffaeb-0da0-4c0a-af53-c67058301b51" containerName="route-controller-manager"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.607006 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="259ffaeb-0da0-4c0a-af53-c67058301b51" containerName="route-controller-manager"
Feb 27 16:58:05 crc kubenswrapper[4708]: E0227 16:58:05.607093 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19acf0cc-9d56-41e7-a7cb-4fe6d3695a14" containerName="pruner"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.607164 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="19acf0cc-9d56-41e7-a7cb-4fe6d3695a14" containerName="pruner"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.607328 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="259ffaeb-0da0-4c0a-af53-c67058301b51" containerName="route-controller-manager"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.607395 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="19acf0cc-9d56-41e7-a7cb-4fe6d3695a14" containerName="pruner"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.607457 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8c016d5-5c1f-4680-a678-8568d218617e" containerName="oc"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.608668 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.616515 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.619052 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.619493 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.619923 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.622333 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.629611 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.631487 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.631614 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.631827 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.635980 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.636057 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f" gracePeriod=600
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.641999 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"]
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.669367 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"19acf0cc-9d56-41e7-a7cb-4fe6d3695a14","Type":"ContainerDied","Data":"19c2ebfd7f90390746318a5ddf9c75aef3b324005f9fd2f3dd0172aede7cf48d"}
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.669421 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="19c2ebfd7f90390746318a5ddf9c75aef3b324005f9fd2f3dd0172aede7cf48d"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.669374 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.670917 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536856-lj688" event={"ID":"c8c016d5-5c1f-4680-a678-8568d218617e","Type":"ContainerDied","Data":"d016c361f638fa2f9f6f5815497606dba972a6bb048115c899b00f3cee0f2f00"}
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.670983 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d016c361f638fa2f9f6f5815497606dba972a6bb048115c899b00f3cee0f2f00"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.671072 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536856-lj688"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.776735 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-serving-cert\") pod \"route-controller-manager-5d46996964-mk82b\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.776803 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-config\") pod \"route-controller-manager-5d46996964-mk82b\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.776924 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-client-ca\") pod \"route-controller-manager-5d46996964-mk82b\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.777000 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqgpc\" (UniqueName: \"kubernetes.io/projected/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-kube-api-access-nqgpc\") pod \"route-controller-manager-5d46996964-mk82b\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.878400 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqgpc\" (UniqueName: \"kubernetes.io/projected/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-kube-api-access-nqgpc\") pod \"route-controller-manager-5d46996964-mk82b\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.878670 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-serving-cert\") pod \"route-controller-manager-5d46996964-mk82b\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.878702 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-config\") pod \"route-controller-manager-5d46996964-mk82b\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.879557 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-client-ca\") pod \"route-controller-manager-5d46996964-mk82b\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.880319 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-client-ca\") pod \"route-controller-manager-5d46996964-mk82b\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.880636 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-config\") pod \"route-controller-manager-5d46996964-mk82b\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.885343 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-serving-cert\") pod \"route-controller-manager-5d46996964-mk82b\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.898553 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqgpc\" (UniqueName: \"kubernetes.io/projected/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-kube-api-access-nqgpc\") pod \"route-controller-manager-5d46996964-mk82b\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.907782 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536858-rfn4q"
Feb 27 16:58:05 crc kubenswrapper[4708]: I0227 16:58:05.945278 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.082403 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dftml\" (UniqueName: \"kubernetes.io/projected/c99fdbbd-b661-4920-975d-c72e040d08fa-kube-api-access-dftml\") pod \"c99fdbbd-b661-4920-975d-c72e040d08fa\" (UID: \"c99fdbbd-b661-4920-975d-c72e040d08fa\") "
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.088929 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c99fdbbd-b661-4920-975d-c72e040d08fa-kube-api-access-dftml" (OuterVolumeSpecName: "kube-api-access-dftml") pod "c99fdbbd-b661-4920-975d-c72e040d08fa" (UID: "c99fdbbd-b661-4920-975d-c72e040d08fa"). InnerVolumeSpecName "kube-api-access-dftml". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.143083 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"]
Feb 27 16:58:06 crc kubenswrapper[4708]: W0227 16:58:06.154283 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef0f2141_43b6_4dba_be79_c5c88b3e73ea.slice/crio-bc71f2c4b1163890b7f63e881ddc42f99a19b15df7e66955609154f36a111315 WatchSource:0}: Error finding container bc71f2c4b1163890b7f63e881ddc42f99a19b15df7e66955609154f36a111315: Status 404 returned error can't find the container with id bc71f2c4b1163890b7f63e881ddc42f99a19b15df7e66955609154f36a111315
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.185453 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dftml\" (UniqueName: \"kubernetes.io/projected/c99fdbbd-b661-4920-975d-c72e040d08fa-kube-api-access-dftml\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.679544 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b" event={"ID":"ef0f2141-43b6-4dba-be79-c5c88b3e73ea","Type":"ContainerStarted","Data":"27220f6cca2044ecddf276f30a6922014dfc4833103f633f91806148743402ef"}
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.679827 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b" event={"ID":"ef0f2141-43b6-4dba-be79-c5c88b3e73ea","Type":"ContainerStarted","Data":"bc71f2c4b1163890b7f63e881ddc42f99a19b15df7e66955609154f36a111315"}
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.679858 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.685343 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536858-rfn4q"
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.685387 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536858-rfn4q" event={"ID":"c99fdbbd-b661-4920-975d-c72e040d08fa","Type":"ContainerDied","Data":"c7d53ff44f0a6a3e0190f7e36a09ed8e814e5b460f7c778dc80148e02442136d"}
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.685445 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7d53ff44f0a6a3e0190f7e36a09ed8e814e5b460f7c778dc80148e02442136d"
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.688288 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f" exitCode=0
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.688548 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f"}
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.688632 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"bded9136f5ebbabd06a46307fbd007f7b15f87dcb532cd3c37c1fe08d4c6e0ab"}
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.704987 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b" podStartSLOduration=11.704973985 podStartE2EDuration="11.704973985s" podCreationTimestamp="2026-02-27 16:57:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:58:06.700500777 +0000 UTC m=+285.216298374" watchObservedRunningTime="2026-02-27 16:58:06.704973985 +0000 UTC m=+285.220771582"
Feb 27 16:58:06 crc kubenswrapper[4708]: I0227 16:58:06.717040 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"
Feb 27 16:58:13 crc kubenswrapper[4708]: I0227 16:58:13.751331 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmm5v" event={"ID":"b091d644-ad3d-4b63-976d-16e3c0caa3e4","Type":"ContainerStarted","Data":"717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66"}
Feb 27 16:58:14 crc kubenswrapper[4708]: I0227 16:58:14.760903 4708 generic.go:334] "Generic (PLEG): container finished" podID="b091d644-ad3d-4b63-976d-16e3c0caa3e4" containerID="717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66" exitCode=0
Feb 27 16:58:14 crc kubenswrapper[4708]: I0227 16:58:14.760989 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmm5v" event={"ID":"b091d644-ad3d-4b63-976d-16e3c0caa3e4","Type":"ContainerDied","Data":"717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66"}
Feb 27 16:58:15 crc kubenswrapper[4708]: I0227 16:58:15.663648 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79f56464fc-mm9fm"]
Feb 27 16:58:15 crc kubenswrapper[4708]: I0227 16:58:15.664074 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" podUID="799aa1e9-629d-481e-b62a-fd8e717acc6b" containerName="controller-manager" containerID="cri-o://14608b95c26287a996396e9d6c80a8d07d401713ca295bed224f774de333adbd" gracePeriod=30
Feb 27 16:58:15 crc kubenswrapper[4708]: I0227 16:58:15.676255 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"]
Feb 27 16:58:15 crc kubenswrapper[4708]: I0227 16:58:15.676680 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b" podUID="ef0f2141-43b6-4dba-be79-c5c88b3e73ea" containerName="route-controller-manager" containerID="cri-o://27220f6cca2044ecddf276f30a6922014dfc4833103f633f91806148743402ef" gracePeriod=30
Feb 27 16:58:15 crc kubenswrapper[4708]: I0227 16:58:15.946774 4708 patch_prober.go:28] interesting pod/route-controller-manager-5d46996964-mk82b container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused" start-of-body=
Feb 27 16:58:15 crc kubenswrapper[4708]: I0227 16:58:15.947237 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b" podUID="ef0f2141-43b6-4dba-be79-c5c88b3e73ea" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.64:8443/healthz\": dial tcp 10.217.0.64:8443: connect: connection refused"
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.781801 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef0f2141-43b6-4dba-be79-c5c88b3e73ea" containerID="27220f6cca2044ecddf276f30a6922014dfc4833103f633f91806148743402ef" exitCode=0
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.781927 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b" event={"ID":"ef0f2141-43b6-4dba-be79-c5c88b3e73ea","Type":"ContainerDied","Data":"27220f6cca2044ecddf276f30a6922014dfc4833103f633f91806148743402ef"}
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.790092 4708 generic.go:334] "Generic (PLEG): container finished" podID="799aa1e9-629d-481e-b62a-fd8e717acc6b" containerID="14608b95c26287a996396e9d6c80a8d07d401713ca295bed224f774de333adbd" exitCode=0
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.790150 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" event={"ID":"799aa1e9-629d-481e-b62a-fd8e717acc6b","Type":"ContainerDied","Data":"14608b95c26287a996396e9d6c80a8d07d401713ca295bed224f774de333adbd"}
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.790187 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" event={"ID":"799aa1e9-629d-481e-b62a-fd8e717acc6b","Type":"ContainerDied","Data":"8de0acf2f9c829569d4b4caf0248e0bf610c34ef30957c0f254cc01ab46d64e6"}
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.790208 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8de0acf2f9c829569d4b4caf0248e0bf610c34ef30957c0f254cc01ab46d64e6"
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.827953 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm"
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.865243 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-config\") pod \"799aa1e9-629d-481e-b62a-fd8e717acc6b\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") "
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.865322 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-proxy-ca-bundles\") pod \"799aa1e9-629d-481e-b62a-fd8e717acc6b\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") "
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.865398 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp9vn\" (UniqueName: \"kubernetes.io/projected/799aa1e9-629d-481e-b62a-fd8e717acc6b-kube-api-access-pp9vn\") pod \"799aa1e9-629d-481e-b62a-fd8e717acc6b\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") "
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.865423 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-client-ca\") pod \"799aa1e9-629d-481e-b62a-fd8e717acc6b\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") "
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.865477 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799aa1e9-629d-481e-b62a-fd8e717acc6b-serving-cert\") pod \"799aa1e9-629d-481e-b62a-fd8e717acc6b\" (UID: \"799aa1e9-629d-481e-b62a-fd8e717acc6b\") "
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.873117 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "799aa1e9-629d-481e-b62a-fd8e717acc6b" (UID: "799aa1e9-629d-481e-b62a-fd8e717acc6b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.873950 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-client-ca" (OuterVolumeSpecName: "client-ca") pod "799aa1e9-629d-481e-b62a-fd8e717acc6b" (UID: "799aa1e9-629d-481e-b62a-fd8e717acc6b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.874332 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-config" (OuterVolumeSpecName: "config") pod "799aa1e9-629d-481e-b62a-fd8e717acc6b" (UID: "799aa1e9-629d-481e-b62a-fd8e717acc6b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.877409 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-848b8f857b-l8l7m"]
Feb 27 16:58:16 crc kubenswrapper[4708]: E0227 16:58:16.877664 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c99fdbbd-b661-4920-975d-c72e040d08fa" containerName="oc"
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.877685 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c99fdbbd-b661-4920-975d-c72e040d08fa" containerName="oc"
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.877675 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/799aa1e9-629d-481e-b62a-fd8e717acc6b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "799aa1e9-629d-481e-b62a-fd8e717acc6b" (UID: "799aa1e9-629d-481e-b62a-fd8e717acc6b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 16:58:16 crc kubenswrapper[4708]: E0227 16:58:16.877717 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="799aa1e9-629d-481e-b62a-fd8e717acc6b" containerName="controller-manager"
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.877799 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="799aa1e9-629d-481e-b62a-fd8e717acc6b" containerName="controller-manager"
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.878183 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="799aa1e9-629d-481e-b62a-fd8e717acc6b" containerName="controller-manager"
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.878214 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c99fdbbd-b661-4920-975d-c72e040d08fa" containerName="oc"
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.878649 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/799aa1e9-629d-481e-b62a-fd8e717acc6b-kube-api-access-pp9vn" (OuterVolumeSpecName: "kube-api-access-pp9vn") pod "799aa1e9-629d-481e-b62a-fd8e717acc6b" (UID: "799aa1e9-629d-481e-b62a-fd8e717acc6b"). InnerVolumeSpecName "kube-api-access-pp9vn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.887077 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.891830 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-848b8f857b-l8l7m"] Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.967498 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-client-ca\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.967549 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-proxy-ca-bundles\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.967715 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-config\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.967830 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7cts\" (UniqueName: \"kubernetes.io/projected/0e543647-6667-4cf9-b8b4-72c3e268e85a-kube-api-access-m7cts\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.967878 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e543647-6667-4cf9-b8b4-72c3e268e85a-serving-cert\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.967926 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.967944 4708 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.967958 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp9vn\" (UniqueName: \"kubernetes.io/projected/799aa1e9-629d-481e-b62a-fd8e717acc6b-kube-api-access-pp9vn\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.967970 4708 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/799aa1e9-629d-481e-b62a-fd8e717acc6b-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 
16:58:16 crc kubenswrapper[4708]: I0227 16:58:16.967981 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/799aa1e9-629d-481e-b62a-fd8e717acc6b-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.068774 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7cts\" (UniqueName: \"kubernetes.io/projected/0e543647-6667-4cf9-b8b4-72c3e268e85a-kube-api-access-m7cts\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.069449 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e543647-6667-4cf9-b8b4-72c3e268e85a-serving-cert\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.069527 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-client-ca\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.069566 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-proxy-ca-bundles\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.069615 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-config\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.072249 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-client-ca\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.073183 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-proxy-ca-bundles\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.073426 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-config\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " 
pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.078960 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e543647-6667-4cf9-b8b4-72c3e268e85a-serving-cert\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.089719 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7cts\" (UniqueName: \"kubernetes.io/projected/0e543647-6667-4cf9-b8b4-72c3e268e85a-kube-api-access-m7cts\") pod \"controller-manager-848b8f857b-l8l7m\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.228520 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.796668 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-79f56464fc-mm9fm" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.851701 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.873384 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-79f56464fc-mm9fm"] Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.878956 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-79f56464fc-mm9fm"] Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.880960 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-config\") pod \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.881124 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqgpc\" (UniqueName: \"kubernetes.io/projected/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-kube-api-access-nqgpc\") pod \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.881179 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-client-ca\") pod \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.881309 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-serving-cert\") pod \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\" (UID: \"ef0f2141-43b6-4dba-be79-c5c88b3e73ea\") " Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.883672 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-client-ca" 
(OuterVolumeSpecName: "client-ca") pod "ef0f2141-43b6-4dba-be79-c5c88b3e73ea" (UID: "ef0f2141-43b6-4dba-be79-c5c88b3e73ea"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.884036 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-config" (OuterVolumeSpecName: "config") pod "ef0f2141-43b6-4dba-be79-c5c88b3e73ea" (UID: "ef0f2141-43b6-4dba-be79-c5c88b3e73ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.891333 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef0f2141-43b6-4dba-be79-c5c88b3e73ea" (UID: "ef0f2141-43b6-4dba-be79-c5c88b3e73ea"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.891348 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-kube-api-access-nqgpc" (OuterVolumeSpecName: "kube-api-access-nqgpc") pod "ef0f2141-43b6-4dba-be79-c5c88b3e73ea" (UID: "ef0f2141-43b6-4dba-be79-c5c88b3e73ea"). InnerVolumeSpecName "kube-api-access-nqgpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.982742 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.982778 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-config\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.982791 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqgpc\" (UniqueName: \"kubernetes.io/projected/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-kube-api-access-nqgpc\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:17 crc kubenswrapper[4708]: I0227 16:58:17.982804 4708 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef0f2141-43b6-4dba-be79-c5c88b3e73ea-client-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:18 crc kubenswrapper[4708]: I0227 16:58:18.237819 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="799aa1e9-629d-481e-b62a-fd8e717acc6b" path="/var/lib/kubelet/pods/799aa1e9-629d-481e-b62a-fd8e717acc6b/volumes" Feb 27 16:58:18 crc kubenswrapper[4708]: I0227 16:58:18.598531 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-55dsj"] Feb 27 16:58:18 crc kubenswrapper[4708]: I0227 16:58:18.801673 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b" event={"ID":"ef0f2141-43b6-4dba-be79-c5c88b3e73ea","Type":"ContainerDied","Data":"bc71f2c4b1163890b7f63e881ddc42f99a19b15df7e66955609154f36a111315"} Feb 27 16:58:18 crc kubenswrapper[4708]: I0227 16:58:18.801721 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b" Feb 27 16:58:18 crc kubenswrapper[4708]: I0227 16:58:18.801725 4708 scope.go:117] "RemoveContainer" containerID="27220f6cca2044ecddf276f30a6922014dfc4833103f633f91806148743402ef" Feb 27 16:58:18 crc kubenswrapper[4708]: I0227 16:58:18.837971 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"] Feb 27 16:58:18 crc kubenswrapper[4708]: I0227 16:58:18.840433 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d46996964-mk82b"] Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.623111 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25"] Feb 27 16:58:19 crc kubenswrapper[4708]: E0227 16:58:19.623526 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0f2141-43b6-4dba-be79-c5c88b3e73ea" containerName="route-controller-manager" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.623552 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0f2141-43b6-4dba-be79-c5c88b3e73ea" containerName="route-controller-manager" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.623778 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef0f2141-43b6-4dba-be79-c5c88b3e73ea" containerName="route-controller-manager" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.624597 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.630530 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.632108 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.632199 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.632550 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.632741 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.632807 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.639127 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25"] Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.716979 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac34d47a-8aa8-488b-b250-43ca858b513f-serving-cert\") pod \"route-controller-manager-64c7765488-kdx25\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:19 crc 
kubenswrapper[4708]: I0227 16:58:19.717062 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr9tr\" (UniqueName: \"kubernetes.io/projected/ac34d47a-8aa8-488b-b250-43ca858b513f-kube-api-access-gr9tr\") pod \"route-controller-manager-64c7765488-kdx25\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.717105 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac34d47a-8aa8-488b-b250-43ca858b513f-config\") pod \"route-controller-manager-64c7765488-kdx25\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.717147 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac34d47a-8aa8-488b-b250-43ca858b513f-client-ca\") pod \"route-controller-manager-64c7765488-kdx25\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.818732 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac34d47a-8aa8-488b-b250-43ca858b513f-serving-cert\") pod \"route-controller-manager-64c7765488-kdx25\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.818813 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gr9tr\" (UniqueName: \"kubernetes.io/projected/ac34d47a-8aa8-488b-b250-43ca858b513f-kube-api-access-gr9tr\") pod \"route-controller-manager-64c7765488-kdx25\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.818898 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac34d47a-8aa8-488b-b250-43ca858b513f-config\") pod \"route-controller-manager-64c7765488-kdx25\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.818934 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac34d47a-8aa8-488b-b250-43ca858b513f-client-ca\") pod \"route-controller-manager-64c7765488-kdx25\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.820649 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac34d47a-8aa8-488b-b250-43ca858b513f-client-ca\") pod \"route-controller-manager-64c7765488-kdx25\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 
16:58:19.822178 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac34d47a-8aa8-488b-b250-43ca858b513f-config\") pod \"route-controller-manager-64c7765488-kdx25\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.828761 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac34d47a-8aa8-488b-b250-43ca858b513f-serving-cert\") pod \"route-controller-manager-64c7765488-kdx25\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.850302 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gr9tr\" (UniqueName: \"kubernetes.io/projected/ac34d47a-8aa8-488b-b250-43ca858b513f-kube-api-access-gr9tr\") pod \"route-controller-manager-64c7765488-kdx25\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:19 crc kubenswrapper[4708]: I0227 16:58:19.958147 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:20 crc kubenswrapper[4708]: I0227 16:58:20.240445 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0f2141-43b6-4dba-be79-c5c88b3e73ea" path="/var/lib/kubelet/pods/ef0f2141-43b6-4dba-be79-c5c88b3e73ea/volumes" Feb 27 16:58:20 crc kubenswrapper[4708]: I0227 16:58:20.765299 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-848b8f857b-l8l7m"] Feb 27 16:58:20 crc kubenswrapper[4708]: I0227 16:58:20.878993 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25"] Feb 27 16:58:20 crc kubenswrapper[4708]: I0227 16:58:20.882293 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw5dq" event={"ID":"70493bd3-d5c2-49e2-bd00-ac98325a2187","Type":"ContainerStarted","Data":"1e14e82409ab8d09acca1fde8ef3efe016f58d71bfd8b7f2a7adb4068664c10c"} Feb 27 16:58:20 crc kubenswrapper[4708]: I0227 16:58:20.903412 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvqlm" event={"ID":"5710135c-fd59-4ff6-b74a-ad7ab8730aff","Type":"ContainerStarted","Data":"4c01e4bf4735bdef78ba03d8ce36795a067c155981a31a1ffd050b3aa6287fb7"} Feb 27 16:58:20 crc kubenswrapper[4708]: I0227 16:58:20.911183 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rtdw" event={"ID":"9b733486-f273-4bd5-afa3-d35d3d1feafc","Type":"ContainerStarted","Data":"70cc36222e98f051d01889440ab849aeb28ffd2fb79aa2ad44504d0ce3d33dec"} Feb 27 16:58:20 crc kubenswrapper[4708]: I0227 16:58:20.928278 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j29cw" event={"ID":"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db","Type":"ContainerStarted","Data":"8a91b78f2c5049316d882e26055972524f6d4af227117da670880771ce3fd676"} Feb 27 16:58:20 crc kubenswrapper[4708]: I0227 16:58:20.930645 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-lmzsx" event={"ID":"96160365-88cf-419c-a2d2-04818cde5016","Type":"ContainerStarted","Data":"4d1a7f7d50dc287f86aef8a570e7dd7ec147b73a9b99c9ce8a69153aec0236cc"} Feb 27 16:58:20 crc kubenswrapper[4708]: I0227 16:58:20.934058 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmm5v" event={"ID":"b091d644-ad3d-4b63-976d-16e3c0caa3e4","Type":"ContainerStarted","Data":"a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343"} Feb 27 16:58:20 crc kubenswrapper[4708]: I0227 16:58:20.940761 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" event={"ID":"0e543647-6667-4cf9-b8b4-72c3e268e85a","Type":"ContainerStarted","Data":"651017fd3c9f52f02ac115f580877303265328aed5aec6572828c6e3a6819fcf"} Feb 27 16:58:20 crc kubenswrapper[4708]: I0227 16:58:20.959350 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xmm5v" podStartSLOduration=3.21099797 podStartE2EDuration="1m3.959312765s" podCreationTimestamp="2026-02-27 16:57:17 +0000 UTC" firstStartedPulling="2026-02-27 16:57:19.68894496 +0000 UTC m=+238.204742547" lastFinishedPulling="2026-02-27 16:58:20.437259715 +0000 UTC m=+298.953057342" observedRunningTime="2026-02-27 16:58:20.955197957 +0000 UTC m=+299.470995544" watchObservedRunningTime="2026-02-27 16:58:20.959312765 +0000 UTC m=+299.475110352" Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.949301 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" event={"ID":"ac34d47a-8aa8-488b-b250-43ca858b513f","Type":"ContainerStarted","Data":"a0dd4fec4adb532978e7685f175fba75fc57ab2ba705cb34773040f4e02132f9"} Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.949688 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" event={"ID":"ac34d47a-8aa8-488b-b250-43ca858b513f","Type":"ContainerStarted","Data":"442d5f98bcc1ecbc57650eefb02d95291d765bc28e091aa163a21d89dc64a835"} Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.950837 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.953248 4708 generic.go:334] "Generic (PLEG): container finished" podID="9b733486-f273-4bd5-afa3-d35d3d1feafc" containerID="70cc36222e98f051d01889440ab849aeb28ffd2fb79aa2ad44504d0ce3d33dec" exitCode=0 Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.953302 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rtdw" event={"ID":"9b733486-f273-4bd5-afa3-d35d3d1feafc","Type":"ContainerDied","Data":"70cc36222e98f051d01889440ab849aeb28ffd2fb79aa2ad44504d0ce3d33dec"} Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.955862 4708 generic.go:334] "Generic (PLEG): container finished" podID="5c38d70c-968f-44dd-b42b-013bc033debb" containerID="dd6c8aca832963915c1709ac87b9c1661834af02abe6f8e65645584b9b6cc858" exitCode=0 Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.955900 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5lwl" 
event={"ID":"5c38d70c-968f-44dd-b42b-013bc033debb","Type":"ContainerDied","Data":"dd6c8aca832963915c1709ac87b9c1661834af02abe6f8e65645584b9b6cc858"} Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.960417 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" event={"ID":"0e543647-6667-4cf9-b8b4-72c3e268e85a","Type":"ContainerStarted","Data":"c651e5bd2c6aae8d8c6e7d21996715c588990a03991b0810a4846c102e707712"} Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.961001 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.963893 4708 generic.go:334] "Generic (PLEG): container finished" podID="b2d410d4-9144-42b4-96c9-345732131a7e" containerID="d457fedc771c627ee45c18aeeafcf6e064deac454ed3e211e64ca8b89d0e6eb0" exitCode=0 Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.963933 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggb2w" event={"ID":"b2d410d4-9144-42b4-96c9-345732131a7e","Type":"ContainerDied","Data":"d457fedc771c627ee45c18aeeafcf6e064deac454ed3e211e64ca8b89d0e6eb0"} Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.964486 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.969452 4708 generic.go:334] "Generic (PLEG): container finished" podID="70493bd3-d5c2-49e2-bd00-ac98325a2187" containerID="1e14e82409ab8d09acca1fde8ef3efe016f58d71bfd8b7f2a7adb4068664c10c" exitCode=0 Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.969552 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw5dq" event={"ID":"70493bd3-d5c2-49e2-bd00-ac98325a2187","Type":"ContainerDied","Data":"1e14e82409ab8d09acca1fde8ef3efe016f58d71bfd8b7f2a7adb4068664c10c"} Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.970763 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.975215 4708 generic.go:334] "Generic (PLEG): container finished" podID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" containerID="8a91b78f2c5049316d882e26055972524f6d4af227117da670880771ce3fd676" exitCode=0 Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.975284 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j29cw" event={"ID":"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db","Type":"ContainerDied","Data":"8a91b78f2c5049316d882e26055972524f6d4af227117da670880771ce3fd676"} Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.980575 4708 generic.go:334] "Generic (PLEG): container finished" podID="96160365-88cf-419c-a2d2-04818cde5016" containerID="4d1a7f7d50dc287f86aef8a570e7dd7ec147b73a9b99c9ce8a69153aec0236cc" exitCode=0 Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.980638 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmzsx" event={"ID":"96160365-88cf-419c-a2d2-04818cde5016","Type":"ContainerDied","Data":"4d1a7f7d50dc287f86aef8a570e7dd7ec147b73a9b99c9ce8a69153aec0236cc"} Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.989646 4708 generic.go:334] "Generic (PLEG): container finished" 
podID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerID="4c01e4bf4735bdef78ba03d8ce36795a067c155981a31a1ffd050b3aa6287fb7" exitCode=0 Feb 27 16:58:21 crc kubenswrapper[4708]: I0227 16:58:21.990204 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvqlm" event={"ID":"5710135c-fd59-4ff6-b74a-ad7ab8730aff","Type":"ContainerDied","Data":"4c01e4bf4735bdef78ba03d8ce36795a067c155981a31a1ffd050b3aa6287fb7"} Feb 27 16:58:22 crc kubenswrapper[4708]: I0227 16:58:22.002007 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" podStartSLOduration=7.001987208 podStartE2EDuration="7.001987208s" podCreationTimestamp="2026-02-27 16:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:58:21.989049168 +0000 UTC m=+300.504846795" watchObservedRunningTime="2026-02-27 16:58:22.001987208 +0000 UTC m=+300.517784815" Feb 27 16:58:22 crc kubenswrapper[4708]: I0227 16:58:22.074212 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" podStartSLOduration=7.074195627 podStartE2EDuration="7.074195627s" podCreationTimestamp="2026-02-27 16:58:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:58:22.073215931 +0000 UTC m=+300.589013528" watchObservedRunningTime="2026-02-27 16:58:22.074195627 +0000 UTC m=+300.589993214" Feb 27 16:58:22 crc kubenswrapper[4708]: I0227 16:58:22.997244 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggb2w" event={"ID":"b2d410d4-9144-42b4-96c9-345732131a7e","Type":"ContainerStarted","Data":"e21b99f1eb9b1e5cf9fdad184b265532500e6cb5a2add99f4d54ed772388eb60"} Feb 27 16:58:23 crc kubenswrapper[4708]: I0227 16:58:23.006265 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw5dq" event={"ID":"70493bd3-d5c2-49e2-bd00-ac98325a2187","Type":"ContainerStarted","Data":"ad1fba3972e5589fa62f1b19f011a4fc321def315ea9805bc0016bf849e514cf"} Feb 27 16:58:23 crc kubenswrapper[4708]: I0227 16:58:23.008738 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmzsx" event={"ID":"96160365-88cf-419c-a2d2-04818cde5016","Type":"ContainerStarted","Data":"a1c2669b0f45732a8d1f0bafb53b7294fa0c3e0072e535cc2721904b5fc7b17e"} Feb 27 16:58:23 crc kubenswrapper[4708]: I0227 16:58:23.011393 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rtdw" event={"ID":"9b733486-f273-4bd5-afa3-d35d3d1feafc","Type":"ContainerStarted","Data":"e29a98787c8b691639acd748be7e28a9e42a68c0b3e52d012a53316227dc7ef4"} Feb 27 16:58:23 crc kubenswrapper[4708]: I0227 16:58:23.016886 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5lwl" event={"ID":"5c38d70c-968f-44dd-b42b-013bc033debb","Type":"ContainerStarted","Data":"c846e86955b2a75c54223b100b1e363379051dd705c7c97ec5ad027d05a26526"} Feb 27 16:58:23 crc kubenswrapper[4708]: I0227 16:58:23.019175 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvqlm" 
event={"ID":"5710135c-fd59-4ff6-b74a-ad7ab8730aff","Type":"ContainerStarted","Data":"d368c91eaa9abf1a1b390b0c41dad6f21e25e60b9d769677af2ea186d33edd59"} Feb 27 16:58:23 crc kubenswrapper[4708]: I0227 16:58:23.023704 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ggb2w" podStartSLOduration=2.966860859 podStartE2EDuration="1m8.023686829s" podCreationTimestamp="2026-02-27 16:57:15 +0000 UTC" firstStartedPulling="2026-02-27 16:57:17.396321084 +0000 UTC m=+235.912118671" lastFinishedPulling="2026-02-27 16:58:22.453147044 +0000 UTC m=+300.968944641" observedRunningTime="2026-02-27 16:58:23.020635549 +0000 UTC m=+301.536433136" watchObservedRunningTime="2026-02-27 16:58:23.023686829 +0000 UTC m=+301.539484416" Feb 27 16:58:23 crc kubenswrapper[4708]: I0227 16:58:23.026313 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j29cw" event={"ID":"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db","Type":"ContainerStarted","Data":"7afa3b2f9f578c5381fd0f454b938e72f4b6cdfe3442b722d2d807683febb5ab"} Feb 27 16:58:23 crc kubenswrapper[4708]: I0227 16:58:23.042122 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p5lwl" podStartSLOduration=3.092505334 podStartE2EDuration="1m7.042101093s" podCreationTimestamp="2026-02-27 16:57:16 +0000 UTC" firstStartedPulling="2026-02-27 16:57:18.459966588 +0000 UTC m=+236.975764175" lastFinishedPulling="2026-02-27 16:58:22.409562347 +0000 UTC m=+300.925359934" observedRunningTime="2026-02-27 16:58:23.039158626 +0000 UTC m=+301.554956213" watchObservedRunningTime="2026-02-27 16:58:23.042101093 +0000 UTC m=+301.557898680" Feb 27 16:58:23 crc kubenswrapper[4708]: I0227 16:58:23.078985 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hw5dq" podStartSLOduration=2.9584910779999998 podStartE2EDuration="1m8.078968662s" podCreationTimestamp="2026-02-27 16:57:15 +0000 UTC" firstStartedPulling="2026-02-27 16:57:17.341583814 +0000 UTC m=+235.857381401" lastFinishedPulling="2026-02-27 16:58:22.462061388 +0000 UTC m=+300.977858985" observedRunningTime="2026-02-27 16:58:23.077919664 +0000 UTC m=+301.593717251" watchObservedRunningTime="2026-02-27 16:58:23.078968662 +0000 UTC m=+301.594766249" Feb 27 16:58:23 crc kubenswrapper[4708]: I0227 16:58:23.079887 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lmzsx" podStartSLOduration=3.107752104 podStartE2EDuration="1m5.079881246s" podCreationTimestamp="2026-02-27 16:57:18 +0000 UTC" firstStartedPulling="2026-02-27 16:57:20.700612927 +0000 UTC m=+239.216410514" lastFinishedPulling="2026-02-27 16:58:22.672742059 +0000 UTC m=+301.188539656" observedRunningTime="2026-02-27 16:58:23.059948802 +0000 UTC m=+301.575746389" watchObservedRunningTime="2026-02-27 16:58:23.079881246 +0000 UTC m=+301.595678833" Feb 27 16:58:23 crc kubenswrapper[4708]: I0227 16:58:23.098986 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7rtdw" podStartSLOduration=3.978842984 podStartE2EDuration="1m9.098968308s" podCreationTimestamp="2026-02-27 16:57:14 +0000 UTC" firstStartedPulling="2026-02-27 16:57:17.376834522 +0000 UTC m=+235.892632109" lastFinishedPulling="2026-02-27 16:58:22.496959846 +0000 UTC m=+301.012757433" observedRunningTime="2026-02-27 16:58:23.097811868 +0000 UTC 
m=+301.613609455" watchObservedRunningTime="2026-02-27 16:58:23.098968308 +0000 UTC m=+301.614765895" Feb 27 16:58:23 crc kubenswrapper[4708]: I0227 16:58:23.117401 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zvqlm" podStartSLOduration=3.00762552 podStartE2EDuration="1m8.117385322s" podCreationTimestamp="2026-02-27 16:57:15 +0000 UTC" firstStartedPulling="2026-02-27 16:57:17.370111525 +0000 UTC m=+235.885909122" lastFinishedPulling="2026-02-27 16:58:22.479871337 +0000 UTC m=+300.995668924" observedRunningTime="2026-02-27 16:58:23.115977635 +0000 UTC m=+301.631775222" watchObservedRunningTime="2026-02-27 16:58:23.117385322 +0000 UTC m=+301.633182909" Feb 27 16:58:23 crc kubenswrapper[4708]: I0227 16:58:23.135977 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-j29cw" podStartSLOduration=3.008079803 podStartE2EDuration="1m5.135961581s" podCreationTimestamp="2026-02-27 16:57:18 +0000 UTC" firstStartedPulling="2026-02-27 16:57:20.683502137 +0000 UTC m=+239.199299724" lastFinishedPulling="2026-02-27 16:58:22.811383915 +0000 UTC m=+301.327181502" observedRunningTime="2026-02-27 16:58:23.13515976 +0000 UTC m=+301.650957347" watchObservedRunningTime="2026-02-27 16:58:23.135961581 +0000 UTC m=+301.651759168" Feb 27 16:58:25 crc kubenswrapper[4708]: I0227 16:58:25.247681 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 16:58:25 crc kubenswrapper[4708]: I0227 16:58:25.248018 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 16:58:25 crc kubenswrapper[4708]: I0227 16:58:25.414606 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 16:58:25 crc kubenswrapper[4708]: I0227 16:58:25.587365 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:58:25 crc kubenswrapper[4708]: I0227 16:58:25.587430 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:58:25 crc kubenswrapper[4708]: I0227 16:58:25.629321 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:58:25 crc kubenswrapper[4708]: I0227 16:58:25.805386 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zvqlm" Feb 27 16:58:25 crc kubenswrapper[4708]: I0227 16:58:25.805457 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zvqlm" Feb 27 16:58:25 crc kubenswrapper[4708]: I0227 16:58:25.859292 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zvqlm" Feb 27 16:58:25 crc kubenswrapper[4708]: I0227 16:58:25.969176 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:58:25 crc kubenswrapper[4708]: I0227 16:58:25.969268 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:58:26 crc kubenswrapper[4708]: I0227 16:58:26.044561 4708 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:58:27 crc kubenswrapper[4708]: I0227 16:58:27.429634 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:58:27 crc kubenswrapper[4708]: I0227 16:58:27.430125 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:58:27 crc kubenswrapper[4708]: I0227 16:58:27.492825 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:58:27 crc kubenswrapper[4708]: I0227 16:58:27.763734 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:58:27 crc kubenswrapper[4708]: I0227 16:58:27.763890 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:58:27 crc kubenswrapper[4708]: I0227 16:58:27.831702 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:58:28 crc kubenswrapper[4708]: I0227 16:58:28.120100 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 16:58:28 crc kubenswrapper[4708]: I0227 16:58:28.127802 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:58:28 crc kubenswrapper[4708]: I0227 16:58:28.608097 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:58:28 crc kubenswrapper[4708]: I0227 16:58:28.608140 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:58:28 crc kubenswrapper[4708]: I0227 16:58:28.989223 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-j29cw" Feb 27 16:58:28 crc kubenswrapper[4708]: I0227 16:58:28.989282 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-j29cw" Feb 27 16:58:29 crc kubenswrapper[4708]: I0227 16:58:29.654898 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lmzsx" podUID="96160365-88cf-419c-a2d2-04818cde5016" containerName="registry-server" probeResult="failure" output=< Feb 27 16:58:29 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 16:58:29 crc kubenswrapper[4708]: > Feb 27 16:58:30 crc kubenswrapper[4708]: I0227 16:58:30.049968 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-j29cw" podUID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" containerName="registry-server" probeResult="failure" output=< Feb 27 16:58:30 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 16:58:30 crc kubenswrapper[4708]: > Feb 27 16:58:31 crc kubenswrapper[4708]: I0227 16:58:31.477493 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmm5v"] Feb 27 16:58:31 crc kubenswrapper[4708]: I0227 16:58:31.478196 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xmm5v" 
podUID="b091d644-ad3d-4b63-976d-16e3c0caa3e4" containerName="registry-server" containerID="cri-o://a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343" gracePeriod=2 Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.051288 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.085820 4708 generic.go:334] "Generic (PLEG): container finished" podID="b091d644-ad3d-4b63-976d-16e3c0caa3e4" containerID="a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343" exitCode=0 Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.085913 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmm5v" event={"ID":"b091d644-ad3d-4b63-976d-16e3c0caa3e4","Type":"ContainerDied","Data":"a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343"} Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.085986 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xmm5v" event={"ID":"b091d644-ad3d-4b63-976d-16e3c0caa3e4","Type":"ContainerDied","Data":"2af99ec97af2e9143f059e117dd6422e291d2e52ee08c7641653b798c8e2b802"} Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.085996 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xmm5v" Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.086025 4708 scope.go:117] "RemoveContainer" containerID="a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343" Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.109929 4708 scope.go:117] "RemoveContainer" containerID="717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66" Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.116016 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d8m5l\" (UniqueName: \"kubernetes.io/projected/b091d644-ad3d-4b63-976d-16e3c0caa3e4-kube-api-access-d8m5l\") pod \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\" (UID: \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\") " Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.116067 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b091d644-ad3d-4b63-976d-16e3c0caa3e4-catalog-content\") pod \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\" (UID: \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\") " Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.116147 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b091d644-ad3d-4b63-976d-16e3c0caa3e4-utilities\") pod \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\" (UID: \"b091d644-ad3d-4b63-976d-16e3c0caa3e4\") " Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.117977 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b091d644-ad3d-4b63-976d-16e3c0caa3e4-utilities" (OuterVolumeSpecName: "utilities") pod "b091d644-ad3d-4b63-976d-16e3c0caa3e4" (UID: "b091d644-ad3d-4b63-976d-16e3c0caa3e4"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.123343 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b091d644-ad3d-4b63-976d-16e3c0caa3e4-kube-api-access-d8m5l" (OuterVolumeSpecName: "kube-api-access-d8m5l") pod "b091d644-ad3d-4b63-976d-16e3c0caa3e4" (UID: "b091d644-ad3d-4b63-976d-16e3c0caa3e4"). InnerVolumeSpecName "kube-api-access-d8m5l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.137014 4708 scope.go:117] "RemoveContainer" containerID="10c40988bcc6330c55dff71d8a6617653932025406d335df054cd283c61e379d" Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.165559 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b091d644-ad3d-4b63-976d-16e3c0caa3e4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b091d644-ad3d-4b63-976d-16e3c0caa3e4" (UID: "b091d644-ad3d-4b63-976d-16e3c0caa3e4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.183162 4708 scope.go:117] "RemoveContainer" containerID="a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343" Feb 27 16:58:32 crc kubenswrapper[4708]: E0227 16:58:32.184249 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343\": container with ID starting with a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343 not found: ID does not exist" containerID="a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343" Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.184356 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343"} err="failed to get container status \"a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343\": rpc error: code = NotFound desc = could not find container \"a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343\": container with ID starting with a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343 not found: ID does not exist" Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.184433 4708 scope.go:117] "RemoveContainer" containerID="717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66" Feb 27 16:58:32 crc kubenswrapper[4708]: E0227 16:58:32.185101 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66\": container with ID starting with 717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66 not found: ID does not exist" containerID="717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66" Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.185147 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66"} err="failed to get container status \"717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66\": rpc error: code = NotFound desc = could not find container \"717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66\": container with ID starting with 
Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.183162 4708 scope.go:117] "RemoveContainer" containerID="a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343"
Feb 27 16:58:32 crc kubenswrapper[4708]: E0227 16:58:32.184249 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343\": container with ID starting with a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343 not found: ID does not exist" containerID="a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343"
Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.184356 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343"} err="failed to get container status \"a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343\": rpc error: code = NotFound desc = could not find container \"a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343\": container with ID starting with a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343 not found: ID does not exist"
Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.184433 4708 scope.go:117] "RemoveContainer" containerID="717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66"
Feb 27 16:58:32 crc kubenswrapper[4708]: E0227 16:58:32.185101 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66\": container with ID starting with 717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66 not found: ID does not exist" containerID="717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66"
Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.185147 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66"} err="failed to get container status \"717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66\": rpc error: code = NotFound desc = could not find container \"717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66\": container with ID starting with 717ed8529aee4e260158c5003a87c8e7c5c3c470baa160be266d80065e0aec66 not found: ID does not exist"
Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.185183 4708 scope.go:117] "RemoveContainer" containerID="10c40988bcc6330c55dff71d8a6617653932025406d335df054cd283c61e379d"
Feb 27 16:58:32 crc kubenswrapper[4708]: E0227 16:58:32.185762 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"10c40988bcc6330c55dff71d8a6617653932025406d335df054cd283c61e379d\": container with ID starting with 10c40988bcc6330c55dff71d8a6617653932025406d335df054cd283c61e379d not found: ID does not exist" containerID="10c40988bcc6330c55dff71d8a6617653932025406d335df054cd283c61e379d"
Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.185808 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"10c40988bcc6330c55dff71d8a6617653932025406d335df054cd283c61e379d"} err="failed to get container status \"10c40988bcc6330c55dff71d8a6617653932025406d335df054cd283c61e379d\": rpc error: code = NotFound desc = could not find container \"10c40988bcc6330c55dff71d8a6617653932025406d335df054cd283c61e379d\": container with ID starting with 10c40988bcc6330c55dff71d8a6617653932025406d335df054cd283c61e379d not found: ID does not exist"
Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.220698 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d8m5l\" (UniqueName: \"kubernetes.io/projected/b091d644-ad3d-4b63-976d-16e3c0caa3e4-kube-api-access-d8m5l\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.220741 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b091d644-ad3d-4b63-976d-16e3c0caa3e4-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.220753 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b091d644-ad3d-4b63-976d-16e3c0caa3e4-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.410533 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmm5v"]
Feb 27 16:58:32 crc kubenswrapper[4708]: I0227 16:58:32.420071 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xmm5v"]
Feb 27 16:58:34 crc kubenswrapper[4708]: I0227 16:58:34.239823 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b091d644-ad3d-4b63-976d-16e3c0caa3e4" path="/var/lib/kubelet/pods/b091d644-ad3d-4b63-976d-16e3c0caa3e4/volumes"
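The E-level "ContainerStatus from runtime service failed ... NotFound" entries above are a benign race, not a real fault: a second RemoveContainer pass asks CRI-O for the status of containers the earlier pass already deleted, and "ID does not exist" is treated as already-removed. A sketch of the same idempotent check against the CRI gRPC API, assuming the standard k8s.io/cri-api client and the default CRI-O socket path:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/credentials/insecure"
        "google.golang.org/grpc/status"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // /var/run/crio/crio.sock is the CRI-O default; adjust for other runtimes.
        conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Container ID taken from the log entries above.
        id := "a75ea228576d9fed0de6ef49626a6370008794acf850a0307db1a957a71d3343"
        _, err = rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
        if status.Code(err) == codes.NotFound {
            // The condition kubelet logs as "ID does not exist": nothing left to do.
            fmt.Println("container already removed; treating as success")
            return
        }
        if err != nil {
            panic(err) // any other RPC error is a real failure
        }
        fmt.Println("container still present; a deleter would call RemoveContainer here")
    }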
Feb 27 16:58:35 crc kubenswrapper[4708]: I0227 16:58:35.328472 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7rtdw"
Feb 27 16:58:35 crc kubenswrapper[4708]: I0227 16:58:35.684419 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-848b8f857b-l8l7m"]
Feb 27 16:58:35 crc kubenswrapper[4708]: I0227 16:58:35.684931 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" podUID="0e543647-6667-4cf9-b8b4-72c3e268e85a" containerName="controller-manager" containerID="cri-o://c651e5bd2c6aae8d8c6e7d21996715c588990a03991b0810a4846c102e707712" gracePeriod=30
Feb 27 16:58:35 crc kubenswrapper[4708]: I0227 16:58:35.694647 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hw5dq"
Feb 27 16:58:35 crc kubenswrapper[4708]: I0227 16:58:35.767431 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25"]
Feb 27 16:58:35 crc kubenswrapper[4708]: I0227 16:58:35.767626 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" podUID="ac34d47a-8aa8-488b-b250-43ca858b513f" containerName="route-controller-manager" containerID="cri-o://a0dd4fec4adb532978e7685f175fba75fc57ab2ba705cb34773040f4e02132f9" gracePeriod=30
Feb 27 16:58:35 crc kubenswrapper[4708]: I0227 16:58:35.840268 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zvqlm"
Feb 27 16:58:36 crc kubenswrapper[4708]: I0227 16:58:36.021638 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ggb2w"
Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.124673 4708 generic.go:334] "Generic (PLEG): container finished" podID="0e543647-6667-4cf9-b8b4-72c3e268e85a" containerID="c651e5bd2c6aae8d8c6e7d21996715c588990a03991b0810a4846c102e707712" exitCode=0
Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.124830 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" event={"ID":"0e543647-6667-4cf9-b8b4-72c3e268e85a","Type":"ContainerDied","Data":"c651e5bd2c6aae8d8c6e7d21996715c588990a03991b0810a4846c102e707712"}
Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.127132 4708 generic.go:334] "Generic (PLEG): container finished" podID="ac34d47a-8aa8-488b-b250-43ca858b513f" containerID="a0dd4fec4adb532978e7685f175fba75fc57ab2ba705cb34773040f4e02132f9" exitCode=0
Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.127196 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" event={"ID":"ac34d47a-8aa8-488b-b250-43ca858b513f","Type":"ContainerDied","Data":"a0dd4fec4adb532978e7685f175fba75fc57ab2ba705cb34773040f4e02132f9"}
Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.230108 4708 patch_prober.go:28] interesting pod/controller-manager-848b8f857b-l8l7m container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body=
Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.230178 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" podUID="0e543647-6667-4cf9-b8b4-72c3e268e85a" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused"
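The readiness failure right after ContainerDied ("connect: connection refused" against 10.217.0.65:8443/healthz) is the expected shape of a graceful shutdown: kubelet sent SIGTERM with a 30-second grace period, the controller-manager stopped listening, and the next probe found nothing on the port. A generic sketch of the container-side pattern (not the controller-manager's actual handler): fail /healthz on SIGTERM, drain, then close the listener:

    package main

    import (
        "context"
        "net/http"
        "os"
        "os/signal"
        "sync/atomic"
        "syscall"
        "time"
    )

    func main() {
        var draining atomic.Bool
        mux := http.NewServeMux()
        mux.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            if draining.Load() {
                http.Error(w, "shutting down", http.StatusServiceUnavailable)
                return
            }
            w.Write([]byte("ok"))
        })
        srv := &http.Server{Addr: ":8443", Handler: mux}

        go func() {
            sig := make(chan os.Signal, 1)
            signal.Notify(sig, syscall.SIGTERM) // kubelet sends SIGTERM, then waits out the grace period
            <-sig
            draining.Store(true)        // readiness now fails with 503
            time.Sleep(5 * time.Second) // give endpoint controllers time to drop this pod
            ctx, cancel := context.WithTimeout(context.Background(), 25*time.Second)
            defer cancel()
            srv.Shutdown(ctx) // after this, probes see "connection refused"
        }()
        srv.ListenAndServe()
    }

Once the listener is gone, the "connection refused" output above is what the prober records until the pod object itself is removed.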
containerID="cri-o://ad1fba3972e5589fa62f1b19f011a4fc321def315ea9805bc0016bf849e514cf" gracePeriod=2 Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.947330 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.951663 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.992483 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57b9d8c589-g6rzm"] Feb 27 16:58:37 crc kubenswrapper[4708]: E0227 16:58:37.992816 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b091d644-ad3d-4b63-976d-16e3c0caa3e4" containerName="extract-content" Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.992843 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b091d644-ad3d-4b63-976d-16e3c0caa3e4" containerName="extract-content" Feb 27 16:58:37 crc kubenswrapper[4708]: E0227 16:58:37.992887 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac34d47a-8aa8-488b-b250-43ca858b513f" containerName="route-controller-manager" Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.992902 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac34d47a-8aa8-488b-b250-43ca858b513f" containerName="route-controller-manager" Feb 27 16:58:37 crc kubenswrapper[4708]: E0227 16:58:37.992921 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e543647-6667-4cf9-b8b4-72c3e268e85a" containerName="controller-manager" Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.992936 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e543647-6667-4cf9-b8b4-72c3e268e85a" containerName="controller-manager" Feb 27 16:58:37 crc kubenswrapper[4708]: E0227 16:58:37.992967 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b091d644-ad3d-4b63-976d-16e3c0caa3e4" containerName="registry-server" Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.992980 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b091d644-ad3d-4b63-976d-16e3c0caa3e4" containerName="registry-server" Feb 27 16:58:37 crc kubenswrapper[4708]: E0227 16:58:37.993002 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b091d644-ad3d-4b63-976d-16e3c0caa3e4" containerName="extract-utilities" Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.993014 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b091d644-ad3d-4b63-976d-16e3c0caa3e4" containerName="extract-utilities" Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.993184 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="b091d644-ad3d-4b63-976d-16e3c0caa3e4" containerName="registry-server" Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.993215 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e543647-6667-4cf9-b8b4-72c3e268e85a" containerName="controller-manager" Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.993240 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac34d47a-8aa8-488b-b250-43ca858b513f" containerName="route-controller-manager" Feb 27 16:58:37 crc kubenswrapper[4708]: I0227 16:58:37.993800 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.003950 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-config\") pod \"0e543647-6667-4cf9-b8b4-72c3e268e85a\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.004020 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-client-ca\") pod \"0e543647-6667-4cf9-b8b4-72c3e268e85a\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.004075 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac34d47a-8aa8-488b-b250-43ca858b513f-serving-cert\") pod \"ac34d47a-8aa8-488b-b250-43ca858b513f\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.004135 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-proxy-ca-bundles\") pod \"0e543647-6667-4cf9-b8b4-72c3e268e85a\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.004195 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac34d47a-8aa8-488b-b250-43ca858b513f-client-ca\") pod \"ac34d47a-8aa8-488b-b250-43ca858b513f\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.004253 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac34d47a-8aa8-488b-b250-43ca858b513f-config\") pod \"ac34d47a-8aa8-488b-b250-43ca858b513f\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.004354 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gr9tr\" (UniqueName: \"kubernetes.io/projected/ac34d47a-8aa8-488b-b250-43ca858b513f-kube-api-access-gr9tr\") pod \"ac34d47a-8aa8-488b-b250-43ca858b513f\" (UID: \"ac34d47a-8aa8-488b-b250-43ca858b513f\") " Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.004407 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7cts\" (UniqueName: \"kubernetes.io/projected/0e543647-6667-4cf9-b8b4-72c3e268e85a-kube-api-access-m7cts\") pod \"0e543647-6667-4cf9-b8b4-72c3e268e85a\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.004452 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e543647-6667-4cf9-b8b4-72c3e268e85a-serving-cert\") pod \"0e543647-6667-4cf9-b8b4-72c3e268e85a\" (UID: \"0e543647-6667-4cf9-b8b4-72c3e268e85a\") " Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.006741 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0e543647-6667-4cf9-b8b4-72c3e268e85a" 
(UID: "0e543647-6667-4cf9-b8b4-72c3e268e85a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.007240 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-client-ca" (OuterVolumeSpecName: "client-ca") pod "0e543647-6667-4cf9-b8b4-72c3e268e85a" (UID: "0e543647-6667-4cf9-b8b4-72c3e268e85a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.007810 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-config" (OuterVolumeSpecName: "config") pod "0e543647-6667-4cf9-b8b4-72c3e268e85a" (UID: "0e543647-6667-4cf9-b8b4-72c3e268e85a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.008607 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac34d47a-8aa8-488b-b250-43ca858b513f-config" (OuterVolumeSpecName: "config") pod "ac34d47a-8aa8-488b-b250-43ca858b513f" (UID: "ac34d47a-8aa8-488b-b250-43ca858b513f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.010376 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ac34d47a-8aa8-488b-b250-43ca858b513f-client-ca" (OuterVolumeSpecName: "client-ca") pod "ac34d47a-8aa8-488b-b250-43ca858b513f" (UID: "ac34d47a-8aa8-488b-b250-43ca858b513f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.010508 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b9d8c589-g6rzm"] Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.012092 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0e543647-6667-4cf9-b8b4-72c3e268e85a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0e543647-6667-4cf9-b8b4-72c3e268e85a" (UID: "0e543647-6667-4cf9-b8b4-72c3e268e85a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.014013 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac34d47a-8aa8-488b-b250-43ca858b513f-kube-api-access-gr9tr" (OuterVolumeSpecName: "kube-api-access-gr9tr") pod "ac34d47a-8aa8-488b-b250-43ca858b513f" (UID: "ac34d47a-8aa8-488b-b250-43ca858b513f"). InnerVolumeSpecName "kube-api-access-gr9tr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.015503 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac34d47a-8aa8-488b-b250-43ca858b513f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ac34d47a-8aa8-488b-b250-43ca858b513f" (UID: "ac34d47a-8aa8-488b-b250-43ca858b513f"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.017245 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e543647-6667-4cf9-b8b4-72c3e268e85a-kube-api-access-m7cts" (OuterVolumeSpecName: "kube-api-access-m7cts") pod "0e543647-6667-4cf9-b8b4-72c3e268e85a" (UID: "0e543647-6667-4cf9-b8b4-72c3e268e85a"). InnerVolumeSpecName "kube-api-access-m7cts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.073560 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ggb2w"] Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.073801 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ggb2w" podUID="b2d410d4-9144-42b4-96c9-345732131a7e" containerName="registry-server" containerID="cri-o://e21b99f1eb9b1e5cf9fdad184b265532500e6cb5a2add99f4d54ed772388eb60" gracePeriod=2 Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.106702 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-proxy-ca-bundles\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.106803 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-serving-cert\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.106912 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-config\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.106963 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsj4g\" (UniqueName: \"kubernetes.io/projected/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-kube-api-access-tsj4g\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.107022 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-client-ca\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.107203 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gr9tr\" (UniqueName: \"kubernetes.io/projected/ac34d47a-8aa8-488b-b250-43ca858b513f-kube-api-access-gr9tr\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:38 crc 
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.107236 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7cts\" (UniqueName: \"kubernetes.io/projected/0e543647-6667-4cf9-b8b4-72c3e268e85a-kube-api-access-m7cts\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.107249 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0e543647-6667-4cf9-b8b4-72c3e268e85a-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.107261 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.107275 4708 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.107286 4708 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac34d47a-8aa8-488b-b250-43ca858b513f-serving-cert\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.107296 4708 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0e543647-6667-4cf9-b8b4-72c3e268e85a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.107307 4708 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ac34d47a-8aa8-488b-b250-43ca858b513f-client-ca\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.107317 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac34d47a-8aa8-488b-b250-43ca858b513f-config\") on node \"crc\" DevicePath \"\""
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.134032 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25" event={"ID":"ac34d47a-8aa8-488b-b250-43ca858b513f","Type":"ContainerDied","Data":"442d5f98bcc1ecbc57650eefb02d95291d765bc28e091aa163a21d89dc64a835"}
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.134064 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.134097 4708 scope.go:117] "RemoveContainer" containerID="a0dd4fec4adb532978e7685f175fba75fc57ab2ba705cb34773040f4e02132f9"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.136825 4708 generic.go:334] "Generic (PLEG): container finished" podID="70493bd3-d5c2-49e2-bd00-ac98325a2187" containerID="ad1fba3972e5589fa62f1b19f011a4fc321def315ea9805bc0016bf849e514cf" exitCode=0
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.137018 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw5dq" event={"ID":"70493bd3-d5c2-49e2-bd00-ac98325a2187","Type":"ContainerDied","Data":"ad1fba3972e5589fa62f1b19f011a4fc321def315ea9805bc0016bf849e514cf"}
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.138696 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m" event={"ID":"0e543647-6667-4cf9-b8b4-72c3e268e85a","Type":"ContainerDied","Data":"651017fd3c9f52f02ac115f580877303265328aed5aec6572828c6e3a6819fcf"}
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.138749 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-848b8f857b-l8l7m"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.192230 4708 scope.go:117] "RemoveContainer" containerID="c651e5bd2c6aae8d8c6e7d21996715c588990a03991b0810a4846c102e707712"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.204189 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-848b8f857b-l8l7m"]
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.208729 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-proxy-ca-bundles\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.208797 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-serving-cert\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.208940 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-config\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.208984 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsj4g\" (UniqueName: \"kubernetes.io/projected/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-kube-api-access-tsj4g\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.209042 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-client-ca\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.210504 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-client-ca\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.212172 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-proxy-ca-bundles\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.212254 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-config\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.218313 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-848b8f857b-l8l7m"]
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.219716 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-serving-cert\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.227754 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25"]
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.234441 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e543647-6667-4cf9-b8b4-72c3e268e85a" path="/var/lib/kubelet/pods/0e543647-6667-4cf9-b8b4-72c3e268e85a/volumes"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.234971 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-64c7765488-kdx25"]
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.238622 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsj4g\" (UniqueName: \"kubernetes.io/projected/ccc8b136-e4a3-4c45-b79b-f8e2ef931b32-kube-api-access-tsj4g\") pod \"controller-manager-57b9d8c589-g6rzm\" (UID: \"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32\") " pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.311326 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.429194 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ggb2w"
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.512386 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gncnx\" (UniqueName: \"kubernetes.io/projected/b2d410d4-9144-42b4-96c9-345732131a7e-kube-api-access-gncnx\") pod \"b2d410d4-9144-42b4-96c9-345732131a7e\" (UID: \"b2d410d4-9144-42b4-96c9-345732131a7e\") "
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.512433 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d410d4-9144-42b4-96c9-345732131a7e-utilities\") pod \"b2d410d4-9144-42b4-96c9-345732131a7e\" (UID: \"b2d410d4-9144-42b4-96c9-345732131a7e\") "
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.512494 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d410d4-9144-42b4-96c9-345732131a7e-catalog-content\") pod \"b2d410d4-9144-42b4-96c9-345732131a7e\" (UID: \"b2d410d4-9144-42b4-96c9-345732131a7e\") "
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.513665 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2d410d4-9144-42b4-96c9-345732131a7e-utilities" (OuterVolumeSpecName: "utilities") pod "b2d410d4-9144-42b4-96c9-345732131a7e" (UID: "b2d410d4-9144-42b4-96c9-345732131a7e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.516671 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2d410d4-9144-42b4-96c9-345732131a7e-kube-api-access-gncnx" (OuterVolumeSpecName: "kube-api-access-gncnx") pod "b2d410d4-9144-42b4-96c9-345732131a7e" (UID: "b2d410d4-9144-42b4-96c9-345732131a7e"). InnerVolumeSpecName "kube-api-access-gncnx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.568383 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b2d410d4-9144-42b4-96c9-345732131a7e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b2d410d4-9144-42b4-96c9-345732131a7e" (UID: "b2d410d4-9144-42b4-96c9-345732131a7e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Need to start a new one" pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.613736 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gncnx\" (UniqueName: \"kubernetes.io/projected/b2d410d4-9144-42b4-96c9-345732131a7e-kube-api-access-gncnx\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.613767 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b2d410d4-9144-42b4-96c9-345732131a7e-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.613782 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b2d410d4-9144-42b4-96c9-345732131a7e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.685268 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.714372 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70493bd3-d5c2-49e2-bd00-ac98325a2187-utilities\") pod \"70493bd3-d5c2-49e2-bd00-ac98325a2187\" (UID: \"70493bd3-d5c2-49e2-bd00-ac98325a2187\") " Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.714430 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44qz8\" (UniqueName: \"kubernetes.io/projected/70493bd3-d5c2-49e2-bd00-ac98325a2187-kube-api-access-44qz8\") pod \"70493bd3-d5c2-49e2-bd00-ac98325a2187\" (UID: \"70493bd3-d5c2-49e2-bd00-ac98325a2187\") " Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.714516 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70493bd3-d5c2-49e2-bd00-ac98325a2187-catalog-content\") pod \"70493bd3-d5c2-49e2-bd00-ac98325a2187\" (UID: \"70493bd3-d5c2-49e2-bd00-ac98325a2187\") " Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.715912 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70493bd3-d5c2-49e2-bd00-ac98325a2187-utilities" (OuterVolumeSpecName: "utilities") pod "70493bd3-d5c2-49e2-bd00-ac98325a2187" (UID: "70493bd3-d5c2-49e2-bd00-ac98325a2187"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.719445 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70493bd3-d5c2-49e2-bd00-ac98325a2187-kube-api-access-44qz8" (OuterVolumeSpecName: "kube-api-access-44qz8") pod "70493bd3-d5c2-49e2-bd00-ac98325a2187" (UID: "70493bd3-d5c2-49e2-bd00-ac98325a2187"). InnerVolumeSpecName "kube-api-access-44qz8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.738331 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.782645 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b9d8c589-g6rzm"] Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.795068 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70493bd3-d5c2-49e2-bd00-ac98325a2187-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70493bd3-d5c2-49e2-bd00-ac98325a2187" (UID: "70493bd3-d5c2-49e2-bd00-ac98325a2187"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.815776 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70493bd3-d5c2-49e2-bd00-ac98325a2187-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.815801 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70493bd3-d5c2-49e2-bd00-ac98325a2187-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:38 crc kubenswrapper[4708]: I0227 16:58:38.815811 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44qz8\" (UniqueName: \"kubernetes.io/projected/70493bd3-d5c2-49e2-bd00-ac98325a2187-kube-api-access-44qz8\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.047178 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-j29cw" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.113048 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-j29cw" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.149962 4708 generic.go:334] "Generic (PLEG): container finished" podID="b2d410d4-9144-42b4-96c9-345732131a7e" containerID="e21b99f1eb9b1e5cf9fdad184b265532500e6cb5a2add99f4d54ed772388eb60" exitCode=0 Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.150022 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggb2w" event={"ID":"b2d410d4-9144-42b4-96c9-345732131a7e","Type":"ContainerDied","Data":"e21b99f1eb9b1e5cf9fdad184b265532500e6cb5a2add99f4d54ed772388eb60"} Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.150086 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ggb2w" event={"ID":"b2d410d4-9144-42b4-96c9-345732131a7e","Type":"ContainerDied","Data":"12b27581b23a64ecab17b77c471df8bd69ef75eac8cb3b23635dcff55e9e0a61"} Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.150086 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ggb2w" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.150110 4708 scope.go:117] "RemoveContainer" containerID="e21b99f1eb9b1e5cf9fdad184b265532500e6cb5a2add99f4d54ed772388eb60" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.157092 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hw5dq" event={"ID":"70493bd3-d5c2-49e2-bd00-ac98325a2187","Type":"ContainerDied","Data":"b5c2b20671590b20ee66a4fc8ed67bc358afc3c305b9d38a96077ef47726b3f0"} Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.157108 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hw5dq" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.159402 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" event={"ID":"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32","Type":"ContainerStarted","Data":"8231675ddd1425a332ae0b0f2a027c4f8daf7919c031dabc3407a51fc6203a18"} Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.159456 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" event={"ID":"ccc8b136-e4a3-4c45-b79b-f8e2ef931b32","Type":"ContainerStarted","Data":"170dcbb88682ced9f7fc48b1b9678715d0695c1ba09822fa44ec3171dc221e0a"} Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.172437 4708 scope.go:117] "RemoveContainer" containerID="d457fedc771c627ee45c18aeeafcf6e064deac454ed3e211e64ca8b89d0e6eb0" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.202296 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" podStartSLOduration=4.202256426 podStartE2EDuration="4.202256426s" podCreationTimestamp="2026-02-27 16:58:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:58:39.19555555 +0000 UTC m=+317.711353127" watchObservedRunningTime="2026-02-27 16:58:39.202256426 +0000 UTC m=+317.718054023" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.217324 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ggb2w"] Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.228492 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ggb2w"] Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.228505 4708 scope.go:117] "RemoveContainer" containerID="1c592fbb6ac1a0683ea713dcfe6e9f6b8b6f72e7ac49d699cec6fa4c3389eff0" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.239182 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hw5dq"] Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.241624 4708 scope.go:117] "RemoveContainer" containerID="e21b99f1eb9b1e5cf9fdad184b265532500e6cb5a2add99f4d54ed772388eb60" Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.241930 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e21b99f1eb9b1e5cf9fdad184b265532500e6cb5a2add99f4d54ed772388eb60\": container with ID starting with e21b99f1eb9b1e5cf9fdad184b265532500e6cb5a2add99f4d54ed772388eb60 not found: ID does not exist" 
containerID="e21b99f1eb9b1e5cf9fdad184b265532500e6cb5a2add99f4d54ed772388eb60" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.241978 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e21b99f1eb9b1e5cf9fdad184b265532500e6cb5a2add99f4d54ed772388eb60"} err="failed to get container status \"e21b99f1eb9b1e5cf9fdad184b265532500e6cb5a2add99f4d54ed772388eb60\": rpc error: code = NotFound desc = could not find container \"e21b99f1eb9b1e5cf9fdad184b265532500e6cb5a2add99f4d54ed772388eb60\": container with ID starting with e21b99f1eb9b1e5cf9fdad184b265532500e6cb5a2add99f4d54ed772388eb60 not found: ID does not exist" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.242008 4708 scope.go:117] "RemoveContainer" containerID="d457fedc771c627ee45c18aeeafcf6e064deac454ed3e211e64ca8b89d0e6eb0" Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.242253 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d457fedc771c627ee45c18aeeafcf6e064deac454ed3e211e64ca8b89d0e6eb0\": container with ID starting with d457fedc771c627ee45c18aeeafcf6e064deac454ed3e211e64ca8b89d0e6eb0 not found: ID does not exist" containerID="d457fedc771c627ee45c18aeeafcf6e064deac454ed3e211e64ca8b89d0e6eb0" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.242286 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d457fedc771c627ee45c18aeeafcf6e064deac454ed3e211e64ca8b89d0e6eb0"} err="failed to get container status \"d457fedc771c627ee45c18aeeafcf6e064deac454ed3e211e64ca8b89d0e6eb0\": rpc error: code = NotFound desc = could not find container \"d457fedc771c627ee45c18aeeafcf6e064deac454ed3e211e64ca8b89d0e6eb0\": container with ID starting with d457fedc771c627ee45c18aeeafcf6e064deac454ed3e211e64ca8b89d0e6eb0 not found: ID does not exist" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.242307 4708 scope.go:117] "RemoveContainer" containerID="1c592fbb6ac1a0683ea713dcfe6e9f6b8b6f72e7ac49d699cec6fa4c3389eff0" Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.242485 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c592fbb6ac1a0683ea713dcfe6e9f6b8b6f72e7ac49d699cec6fa4c3389eff0\": container with ID starting with 1c592fbb6ac1a0683ea713dcfe6e9f6b8b6f72e7ac49d699cec6fa4c3389eff0 not found: ID does not exist" containerID="1c592fbb6ac1a0683ea713dcfe6e9f6b8b6f72e7ac49d699cec6fa4c3389eff0" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.242509 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c592fbb6ac1a0683ea713dcfe6e9f6b8b6f72e7ac49d699cec6fa4c3389eff0"} err="failed to get container status \"1c592fbb6ac1a0683ea713dcfe6e9f6b8b6f72e7ac49d699cec6fa4c3389eff0\": rpc error: code = NotFound desc = could not find container \"1c592fbb6ac1a0683ea713dcfe6e9f6b8b6f72e7ac49d699cec6fa4c3389eff0\": container with ID starting with 1c592fbb6ac1a0683ea713dcfe6e9f6b8b6f72e7ac49d699cec6fa4c3389eff0 not found: ID does not exist" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.242524 4708 scope.go:117] "RemoveContainer" containerID="ad1fba3972e5589fa62f1b19f011a4fc321def315ea9805bc0016bf849e514cf" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.244949 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hw5dq"] Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.265804 4708 scope.go:117] "RemoveContainer" containerID="1e14e82409ab8d09acca1fde8ef3efe016f58d71bfd8b7f2a7adb4068664c10c"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.288027 4708 scope.go:117] "RemoveContainer" containerID="73b77b3ba08fba9c5e79d10554c013930ac929e1642a6aec712f7a06a5f693b8"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.695572 4708 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.696024 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70493bd3-d5c2-49e2-bd00-ac98325a2187" containerName="extract-content"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.696055 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="70493bd3-d5c2-49e2-bd00-ac98325a2187" containerName="extract-content"
Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.696080 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d410d4-9144-42b4-96c9-345732131a7e" containerName="extract-content"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.696093 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d410d4-9144-42b4-96c9-345732131a7e" containerName="extract-content"
Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.696113 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d410d4-9144-42b4-96c9-345732131a7e" containerName="extract-utilities"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.696126 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d410d4-9144-42b4-96c9-345732131a7e" containerName="extract-utilities"
Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.696156 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70493bd3-d5c2-49e2-bd00-ac98325a2187" containerName="extract-utilities"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.696168 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="70493bd3-d5c2-49e2-bd00-ac98325a2187" containerName="extract-utilities"
Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.696192 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70493bd3-d5c2-49e2-bd00-ac98325a2187" containerName="registry-server"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.696204 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="70493bd3-d5c2-49e2-bd00-ac98325a2187" containerName="registry-server"
Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.696221 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2d410d4-9144-42b4-96c9-345732131a7e" containerName="registry-server"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.696233 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2d410d4-9144-42b4-96c9-345732131a7e" containerName="registry-server"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.696411 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2d410d4-9144-42b4-96c9-345732131a7e" containerName="registry-server"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.696432 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="70493bd3-d5c2-49e2-bd00-ac98325a2187" containerName="registry-server"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.697319 4708 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.697516 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.698581 4708 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.699012 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1" gracePeriod=15
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.699096 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961" gracePeriod=15
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.699140 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3" gracePeriod=15
Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.699289 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.699311 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4" gracePeriod=15
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.699340 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer"
Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.699372 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.699386 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.699401 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.699414 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints"
Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.699448 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.699460 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup"
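From here on the source switches to source="file": these are static pods read from the kubelet's manifest directory rather than API objects. Rewriting a manifest is what drives the REMOVE/ADD pair for kube-apiserver-crc and the "Killing container with a grace period ... gracePeriod=15" entries, while a kube-apiserver-startup-monitor static pod is added alongside to watch the rollout. A sketch of watching that directory the way the file source does (kubelet also rescans it periodically; the path is the OpenShift default, and github.com/fsnotify/fsnotify is an assumed dependency):

    package main

    import (
        "log"

        "github.com/fsnotify/fsnotify"
    )

    func main() {
        w, err := fsnotify.NewWatcher()
        if err != nil {
            log.Fatal(err)
        }
        defer w.Close()
        if err := w.Add("/etc/kubernetes/manifests"); err != nil {
            log.Fatal(err)
        }
        for {
            select {
            case ev := <-w.Events:
                // A create/write/remove here is what triggers the
                // "SyncLoop ADD/REMOVE" source="file" pairs above.
                log.Println(ev.Op, ev.Name)
            case err := <-w.Errors:
                log.Println("watch error:", err)
            }
        }
    }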
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.699495 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.699515 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.699528 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.699545 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.698939 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20" gracePeriod=15 Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.699557 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.699658 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.699678 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.699986 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.700017 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.700033 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.700048 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.700064 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.700084 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.700103 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.700116 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-cert-regeneration-controller" Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.700318 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.700342 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:58:39 crc kubenswrapper[4708]: E0227 16:58:39.700357 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.700370 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.700532 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.707131 4708 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.734026 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.734328 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.734506 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.737911 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.738081 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.738198 4708 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.738351 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.738391 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.750200 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.840402 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.840449 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.840499 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.840560 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.840598 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.840638 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: 
\"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.840688 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.840738 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.840826 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.840965 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.841022 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.841022 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.841051 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.841109 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: I0227 16:58:39.841120 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:39 crc kubenswrapper[4708]: 
I0227 16:58:39.841142 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.047721 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:58:40 crc kubenswrapper[4708]: W0227 16:58:40.079418 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-2e070920d0bf93fdc41bf1603e354a4768303c8b341c07a29fb947620c58ebf2 WatchSource:0}: Error finding container 2e070920d0bf93fdc41bf1603e354a4768303c8b341c07a29fb947620c58ebf2: Status 404 returned error can't find the container with id 2e070920d0bf93fdc41bf1603e354a4768303c8b341c07a29fb947620c58ebf2 Feb 27 16:58:40 crc kubenswrapper[4708]: E0227 16:58:40.083420 4708 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189828fce95a2168 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:58:40.082157928 +0000 UTC m=+318.597955545,LastTimestamp:2026-02-27 16:58:40.082157928 +0000 UTC m=+318.597955545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.170152 4708 generic.go:334] "Generic (PLEG): container finished" podID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" containerID="76379c603afdfdf3fa0393ce2b048f7789d99f779be6febce1a61b9c21428db3" exitCode=0 Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.170261 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8b2eb3a2-5689-482d-82a4-b8ec5edf2418","Type":"ContainerDied","Data":"76379c603afdfdf3fa0393ce2b048f7789d99f779be6febce1a61b9c21428db3"} Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.171209 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.171598 4708 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.172086 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"2e070920d0bf93fdc41bf1603e354a4768303c8b341c07a29fb947620c58ebf2"} Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.175241 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/3.log" Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.177100 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.178098 4708 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3" exitCode=0 Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.178140 4708 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1" exitCode=0 Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.178158 4708 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961" exitCode=0 Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.178172 4708 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4" exitCode=2 Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.178243 4708 scope.go:117] "RemoveContainer" containerID="3a122c2d765dddc186722d94d6832ae12de128498e816aa7bf4c05219e69cd3e" Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.181620 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.187282 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.188345 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.188894 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.189375 4708 status_manager.go:851] "Failed to get status for pod" 
podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.236088 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70493bd3-d5c2-49e2-bd00-ac98325a2187" path="/var/lib/kubelet/pods/70493bd3-d5c2-49e2-bd00-ac98325a2187/volumes" Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.237739 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac34d47a-8aa8-488b-b250-43ca858b513f" path="/var/lib/kubelet/pods/ac34d47a-8aa8-488b-b250-43ca858b513f/volumes" Feb 27 16:58:40 crc kubenswrapper[4708]: I0227 16:58:40.238760 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2d410d4-9144-42b4-96c9-345732131a7e" path="/var/lib/kubelet/pods/b2d410d4-9144-42b4-96c9-345732131a7e/volumes" Feb 27 16:58:41 crc kubenswrapper[4708]: I0227 16:58:41.194477 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 16:58:41 crc kubenswrapper[4708]: I0227 16:58:41.759629 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:58:41 crc kubenswrapper[4708]: I0227 16:58:41.761378 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:41 crc kubenswrapper[4708]: I0227 16:58:41.762112 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:41 crc kubenswrapper[4708]: I0227 16:58:41.762805 4708 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:41 crc kubenswrapper[4708]: I0227 16:58:41.877247 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-kubelet-dir\") pod \"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\" (UID: \"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\") " Feb 27 16:58:41 crc kubenswrapper[4708]: I0227 16:58:41.877301 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-kube-api-access\") pod \"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\" (UID: \"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\") " Feb 27 16:58:41 crc kubenswrapper[4708]: I0227 16:58:41.877418 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-var-lock\") pod \"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\" (UID: \"8b2eb3a2-5689-482d-82a4-b8ec5edf2418\") " Feb 27 16:58:41 crc kubenswrapper[4708]: I0227 16:58:41.877669 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-var-lock" (OuterVolumeSpecName: "var-lock") pod "8b2eb3a2-5689-482d-82a4-b8ec5edf2418" (UID: "8b2eb3a2-5689-482d-82a4-b8ec5edf2418"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:58:41 crc kubenswrapper[4708]: I0227 16:58:41.877706 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "8b2eb3a2-5689-482d-82a4-b8ec5edf2418" (UID: "8b2eb3a2-5689-482d-82a4-b8ec5edf2418"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:58:41 crc kubenswrapper[4708]: I0227 16:58:41.898396 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "8b2eb3a2-5689-482d-82a4-b8ec5edf2418" (UID: "8b2eb3a2-5689-482d-82a4-b8ec5edf2418"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:58:41 crc kubenswrapper[4708]: I0227 16:58:41.978964 4708 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-var-lock\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:41 crc kubenswrapper[4708]: I0227 16:58:41.978995 4708 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-kubelet-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:41 crc kubenswrapper[4708]: I0227 16:58:41.979005 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/8b2eb3a2-5689-482d-82a4-b8ec5edf2418-kube-api-access\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.073537 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.074373 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.074983 4708 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.075478 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.076013 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.076292 4708 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.181144 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.181263 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.181340 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.181394 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.181353 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.181473 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.182253 4708 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.182291 4708 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.182312 4708 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.204568 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"8b2eb3a2-5689-482d-82a4-b8ec5edf2418","Type":"ContainerDied","Data":"f0accb6493c2316d66914436989427d32324fa6a16570099ad2661ae9c337c79"} Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.204617 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0accb6493c2316d66914436989427d32324fa6a16570099ad2661ae9c337c79" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.204632 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.206813 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"50ccc25fb701392fba2b6b461b90820ec8b4c74f3fe16296687dbf20847b1812"} Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.208081 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.208506 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.208843 4708 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.209423 4708 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.211012 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.212177 4708 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20" exitCode=0 Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.212238 4708 scope.go:117] "RemoveContainer" containerID="d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.212651 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.233679 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.234347 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.235870 4708 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.237567 4708 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.237663 4708 scope.go:117] "RemoveContainer" containerID="b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.242593 4708 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.243422 4708 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.243714 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.244328 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.244969 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.252258 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.252793 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.253416 4708 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.267053 4708 scope.go:117] "RemoveContainer" containerID="7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.294017 4708 scope.go:117] "RemoveContainer" containerID="ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4" Feb 27 16:58:42 crc kubenswrapper[4708]: E0227 16:58:42.295497 4708 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189828fce95a2168 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:58:40.082157928 +0000 UTC m=+318.597955545,LastTimestamp:2026-02-27 16:58:40.082157928 +0000 UTC m=+318.597955545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.318129 4708 scope.go:117] "RemoveContainer" containerID="31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.350344 4708 scope.go:117] "RemoveContainer" containerID="1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.384919 4708 scope.go:117] "RemoveContainer" containerID="d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3" Feb 27 16:58:42 crc 
kubenswrapper[4708]: E0227 16:58:42.385627 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\": container with ID starting with d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3 not found: ID does not exist" containerID="d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.385698 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3"} err="failed to get container status \"d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\": rpc error: code = NotFound desc = could not find container \"d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3\": container with ID starting with d458ff12b854be5010581c4c3c4bcc24b3a9c488beedaf4b0b3a7800b56d4cd3 not found: ID does not exist" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.385747 4708 scope.go:117] "RemoveContainer" containerID="b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1" Feb 27 16:58:42 crc kubenswrapper[4708]: E0227 16:58:42.386980 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\": container with ID starting with b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1 not found: ID does not exist" containerID="b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.387035 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1"} err="failed to get container status \"b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\": rpc error: code = NotFound desc = could not find container \"b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1\": container with ID starting with b4a07c90505b86d272cbd3bedec1f6692aed09b0bb5909a8c381e58dd0dd48a1 not found: ID does not exist" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.387072 4708 scope.go:117] "RemoveContainer" containerID="7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961" Feb 27 16:58:42 crc kubenswrapper[4708]: E0227 16:58:42.387692 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\": container with ID starting with 7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961 not found: ID does not exist" containerID="7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.387756 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961"} err="failed to get container status \"7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\": rpc error: code = NotFound desc = could not find container \"7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961\": container with ID starting with 7f865659001c16098006279458048f4fd315291f3f8c290f5cb99c456768a961 not found: ID does not exist" Feb 27 16:58:42 crc kubenswrapper[4708]: 
I0227 16:58:42.387798 4708 scope.go:117] "RemoveContainer" containerID="ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4" Feb 27 16:58:42 crc kubenswrapper[4708]: E0227 16:58:42.388913 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\": container with ID starting with ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4 not found: ID does not exist" containerID="ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.388962 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4"} err="failed to get container status \"ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\": rpc error: code = NotFound desc = could not find container \"ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4\": container with ID starting with ae75972a9cddc4166d7d176686d826347e904d576c837f84f954769f24cdf9b4 not found: ID does not exist" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.388997 4708 scope.go:117] "RemoveContainer" containerID="31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20" Feb 27 16:58:42 crc kubenswrapper[4708]: E0227 16:58:42.389449 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\": container with ID starting with 31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20 not found: ID does not exist" containerID="31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.389497 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20"} err="failed to get container status \"31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\": rpc error: code = NotFound desc = could not find container \"31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20\": container with ID starting with 31de7a981ecb31b6136926c9cb2637fbca3ffc7627c97329a1f33b5f55d5cb20 not found: ID does not exist" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.389527 4708 scope.go:117] "RemoveContainer" containerID="1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08" Feb 27 16:58:42 crc kubenswrapper[4708]: E0227 16:58:42.390443 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\": container with ID starting with 1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08 not found: ID does not exist" containerID="1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08" Feb 27 16:58:42 crc kubenswrapper[4708]: I0227 16:58:42.390487 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08"} err="failed to get container status \"1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\": rpc error: code = NotFound desc = could not find container \"1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08\": container 
with ID starting with 1a7519f97f8293416c6dfa065207c1af21dda39b33d71d5c4d0471cd44c57f08 not found: ID does not exist" Feb 27 16:58:43 crc kubenswrapper[4708]: I0227 16:58:43.698604 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" containerName="oauth-openshift" containerID="cri-o://ff32f41d589b3510c77a1e0b24957c36d285c8497a8287c361be67df1b90dc23" gracePeriod=15 Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.235054 4708 generic.go:334] "Generic (PLEG): container finished" podID="3de1e003-2dee-4d76-86cd-cd60680535bd" containerID="ff32f41d589b3510c77a1e0b24957c36d285c8497a8287c361be67df1b90dc23" exitCode=0 Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.238212 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" event={"ID":"3de1e003-2dee-4d76-86cd-cd60680535bd","Type":"ContainerDied","Data":"ff32f41d589b3510c77a1e0b24957c36d285c8497a8287c361be67df1b90dc23"} Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.330415 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.331271 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.331988 4708 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.332743 4708 status_manager.go:851] "Failed to get status for pod" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-55dsj\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.333381 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.418297 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-audit-policies\") pod \"3de1e003-2dee-4d76-86cd-cd60680535bd\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.418369 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-error\") pod \"3de1e003-2dee-4d76-86cd-cd60680535bd\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.418412 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-cliconfig\") pod \"3de1e003-2dee-4d76-86cd-cd60680535bd\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.418448 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-ocp-branding-template\") pod \"3de1e003-2dee-4d76-86cd-cd60680535bd\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.418501 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-serving-cert\") pod \"3de1e003-2dee-4d76-86cd-cd60680535bd\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.418542 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-provider-selection\") pod \"3de1e003-2dee-4d76-86cd-cd60680535bd\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.418586 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-service-ca\") pod \"3de1e003-2dee-4d76-86cd-cd60680535bd\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.418637 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqrs8\" (UniqueName: \"kubernetes.io/projected/3de1e003-2dee-4d76-86cd-cd60680535bd-kube-api-access-qqrs8\") pod \"3de1e003-2dee-4d76-86cd-cd60680535bd\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.418672 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-login\") pod \"3de1e003-2dee-4d76-86cd-cd60680535bd\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.418785 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-idp-0-file-data\") pod \"3de1e003-2dee-4d76-86cd-cd60680535bd\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.418834 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-trusted-ca-bundle\") pod \"3de1e003-2dee-4d76-86cd-cd60680535bd\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.418905 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-router-certs\") pod \"3de1e003-2dee-4d76-86cd-cd60680535bd\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.418950 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3de1e003-2dee-4d76-86cd-cd60680535bd-audit-dir\") pod \"3de1e003-2dee-4d76-86cd-cd60680535bd\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.418981 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-session\") pod \"3de1e003-2dee-4d76-86cd-cd60680535bd\" (UID: \"3de1e003-2dee-4d76-86cd-cd60680535bd\") " Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.420124 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "3de1e003-2dee-4d76-86cd-cd60680535bd" (UID: "3de1e003-2dee-4d76-86cd-cd60680535bd"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.420162 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "3de1e003-2dee-4d76-86cd-cd60680535bd" (UID: "3de1e003-2dee-4d76-86cd-cd60680535bd"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.420787 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3de1e003-2dee-4d76-86cd-cd60680535bd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3de1e003-2dee-4d76-86cd-cd60680535bd" (UID: "3de1e003-2dee-4d76-86cd-cd60680535bd"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.422015 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "3de1e003-2dee-4d76-86cd-cd60680535bd" (UID: "3de1e003-2dee-4d76-86cd-cd60680535bd"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.422398 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "3de1e003-2dee-4d76-86cd-cd60680535bd" (UID: "3de1e003-2dee-4d76-86cd-cd60680535bd"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.428095 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3de1e003-2dee-4d76-86cd-cd60680535bd-kube-api-access-qqrs8" (OuterVolumeSpecName: "kube-api-access-qqrs8") pod "3de1e003-2dee-4d76-86cd-cd60680535bd" (UID: "3de1e003-2dee-4d76-86cd-cd60680535bd"). InnerVolumeSpecName "kube-api-access-qqrs8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.428389 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "3de1e003-2dee-4d76-86cd-cd60680535bd" (UID: "3de1e003-2dee-4d76-86cd-cd60680535bd"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.428834 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "3de1e003-2dee-4d76-86cd-cd60680535bd" (UID: "3de1e003-2dee-4d76-86cd-cd60680535bd"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.429124 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "3de1e003-2dee-4d76-86cd-cd60680535bd" (UID: "3de1e003-2dee-4d76-86cd-cd60680535bd"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.429697 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "3de1e003-2dee-4d76-86cd-cd60680535bd" (UID: "3de1e003-2dee-4d76-86cd-cd60680535bd"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.431282 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "3de1e003-2dee-4d76-86cd-cd60680535bd" (UID: "3de1e003-2dee-4d76-86cd-cd60680535bd"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.431549 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "3de1e003-2dee-4d76-86cd-cd60680535bd" (UID: "3de1e003-2dee-4d76-86cd-cd60680535bd"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.431771 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "3de1e003-2dee-4d76-86cd-cd60680535bd" (UID: "3de1e003-2dee-4d76-86cd-cd60680535bd"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.432353 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "3de1e003-2dee-4d76-86cd-cd60680535bd" (UID: "3de1e003-2dee-4d76-86cd-cd60680535bd"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.521607 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.521671 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.521694 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qqrs8\" (UniqueName: \"kubernetes.io/projected/3de1e003-2dee-4d76-86cd-cd60680535bd-kube-api-access-qqrs8\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.521736 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.521758 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.521778 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.521798 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.521819 4708 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3de1e003-2dee-4d76-86cd-cd60680535bd-audit-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.521838 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.521888 4708 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-audit-policies\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.521907 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.521924 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.521943 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:44 crc kubenswrapper[4708]: I0227 16:58:44.521962 4708 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/3de1e003-2dee-4d76-86cd-cd60680535bd-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 16:58:45 crc kubenswrapper[4708]: I0227 16:58:45.246272 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" event={"ID":"3de1e003-2dee-4d76-86cd-cd60680535bd","Type":"ContainerDied","Data":"78d34cb0d36361901ea445f033ed5cd63a907eb75ca6a4b011212a0584b7650a"} Feb 27 16:58:45 crc kubenswrapper[4708]: I0227 16:58:45.246372 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" Feb 27 16:58:45 crc kubenswrapper[4708]: I0227 16:58:45.247869 4708 scope.go:117] "RemoveContainer" containerID="ff32f41d589b3510c77a1e0b24957c36d285c8497a8287c361be67df1b90dc23" Feb 27 16:58:45 crc kubenswrapper[4708]: I0227 16:58:45.248756 4708 status_manager.go:851] "Failed to get status for pod" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-55dsj\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:45 crc kubenswrapper[4708]: I0227 16:58:45.249549 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:45 crc kubenswrapper[4708]: I0227 16:58:45.250366 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:45 crc kubenswrapper[4708]: I0227 16:58:45.251082 4708 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:45 crc kubenswrapper[4708]: I0227 16:58:45.278116 4708 status_manager.go:851] "Failed to get status for pod" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-55dsj\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:45 crc kubenswrapper[4708]: I0227 16:58:45.278688 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:45 crc kubenswrapper[4708]: I0227 16:58:45.279438 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:45 crc kubenswrapper[4708]: I0227 16:58:45.279922 4708 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: 
connection refused" Feb 27 16:58:49 crc kubenswrapper[4708]: E0227 16:58:49.114101 4708 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:49 crc kubenswrapper[4708]: E0227 16:58:49.114999 4708 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:49 crc kubenswrapper[4708]: E0227 16:58:49.115577 4708 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:49 crc kubenswrapper[4708]: E0227 16:58:49.116537 4708 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:49 crc kubenswrapper[4708]: E0227 16:58:49.117074 4708 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:49 crc kubenswrapper[4708]: I0227 16:58:49.117125 4708 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Feb 27 16:58:49 crc kubenswrapper[4708]: E0227 16:58:49.117530 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="200ms" Feb 27 16:58:49 crc kubenswrapper[4708]: E0227 16:58:49.318263 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="400ms" Feb 27 16:58:49 crc kubenswrapper[4708]: E0227 16:58:49.719484 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="800ms" Feb 27 16:58:50 crc kubenswrapper[4708]: E0227 16:58:50.521733 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="1.6s" Feb 27 16:58:52 crc kubenswrapper[4708]: E0227 16:58:52.123054 4708 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.182:6443: connect: connection refused" interval="3.2s" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.227718 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.232881 4708 status_manager.go:851] "Failed to get status for pod" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-55dsj\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.233347 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.233956 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.234479 4708 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.235052 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.235555 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.236165 4708 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.236651 4708 status_manager.go:851] "Failed to get status for pod" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-55dsj\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.247258 4708 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="37500f59-8db5-4c44-b24c-5abacbddf26b" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.247288 4708 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37500f59-8db5-4c44-b24c-5abacbddf26b" Feb 27 16:58:52 crc kubenswrapper[4708]: E0227 16:58:52.247682 4708 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.248440 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:52 crc kubenswrapper[4708]: E0227 16:58:52.296574 4708 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.182:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.189828fce95a2168 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-02-27 16:58:40.082157928 +0000 UTC m=+318.597955545,LastTimestamp:2026-02-27 16:58:40.082157928 +0000 UTC m=+318.597955545,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.302394 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.303118 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.303171 4708 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd" exitCode=1 Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.303237 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd"} Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.303713 4708 scope.go:117] "RemoveContainer" containerID="38aaae2bb69803f7656ed6b64c84b97f9cb9b0e510c64da75fa7d0ee7397dabd" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.304198 4708 status_manager.go:851] "Failed to get status for pod" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-55dsj\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.304889 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.305389 4708 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.305802 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.306322 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"52a7eaa330324262005f6f6767288feaef79a2b4293d34f2f52951d0c3e5b50b"} Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.307515 4708 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:52 crc kubenswrapper[4708]: I0227 16:58:52.537257 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.317165 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/cluster-policy-controller/0.log" Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.317761 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.317887 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dbd9c09fee2dd725848185ba4188fa4105d4ce8df280acae2a091b477f217a29"} Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.319144 4708 status_manager.go:851] "Failed to get status for pod" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-55dsj\": dial 
tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.319680 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.320102 4708 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="f03358fb1b8fa968ae36adb83a8934daf5131968e284c21228b02772626aa86c" exitCode=0 Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.320138 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"f03358fb1b8fa968ae36adb83a8934daf5131968e284c21228b02772626aa86c"} Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.320206 4708 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.320443 4708 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37500f59-8db5-4c44-b24c-5abacbddf26b" Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.320462 4708 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37500f59-8db5-4c44-b24c-5abacbddf26b" Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.320698 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:53 crc kubenswrapper[4708]: E0227 16:58:53.321060 4708 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.321245 4708 status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.321754 4708 status_manager.go:851] "Failed to get status for pod" podUID="ccc8b136-e4a3-4c45-b79b-f8e2ef931b32" pod="openshift-controller-manager/controller-manager-57b9d8c589-g6rzm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-57b9d8c589-g6rzm\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.322298 4708 
status_manager.go:851] "Failed to get status for pod" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.322740 4708 status_manager.go:851] "Failed to get status for pod" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" pod="openshift-authentication/oauth-openshift-558db77b4-55dsj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-55dsj\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.323108 4708 status_manager.go:851] "Failed to get status for pod" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:53 crc kubenswrapper[4708]: I0227 16:58:53.323577 4708 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.182:6443: connect: connection refused" Feb 27 16:58:54 crc kubenswrapper[4708]: I0227 16:58:54.332705 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fe25aa1c8afcae8e468f8f6cd0d5efbc49549f54f09c7d4386e5cf783cb3bf2f"} Feb 27 16:58:54 crc kubenswrapper[4708]: I0227 16:58:54.333074 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6e63f145b9ea57dae4b11fcb874866c6517e4e0f318c3b578a34544c2eebc0f8"} Feb 27 16:58:54 crc kubenswrapper[4708]: I0227 16:58:54.333096 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1b6eca3e9ef1fd17bdaed3ac73fe75e53e843663d7162e218294eda45201b1a6"} Feb 27 16:58:55 crc kubenswrapper[4708]: I0227 16:58:55.342099 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"1f9c6cc98a6114aaea04ab660e555791e704e6697db984ad87acde55e36170bc"} Feb 27 16:58:55 crc kubenswrapper[4708]: I0227 16:58:55.342434 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:55 crc kubenswrapper[4708]: I0227 16:58:55.342445 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"fc1cc0378fff9bfd38e92bbed5bba0ba3fd15b8febbdddd88740547553fab3ae"} Feb 27 16:58:55 crc kubenswrapper[4708]: I0227 16:58:55.342563 4708 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37500f59-8db5-4c44-b24c-5abacbddf26b" Feb 27 16:58:55 
crc kubenswrapper[4708]: I0227 16:58:55.342586 4708 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37500f59-8db5-4c44-b24c-5abacbddf26b" Feb 27 16:58:57 crc kubenswrapper[4708]: I0227 16:58:57.249402 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:57 crc kubenswrapper[4708]: I0227 16:58:57.249820 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:57 crc kubenswrapper[4708]: I0227 16:58:57.260260 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:58:57 crc kubenswrapper[4708]: I0227 16:58:57.341571 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:58:57 crc kubenswrapper[4708]: I0227 16:58:57.350445 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:58:57 crc kubenswrapper[4708]: I0227 16:58:57.357605 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:59:00 crc kubenswrapper[4708]: I0227 16:59:00.362297 4708 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:59:01 crc kubenswrapper[4708]: I0227 16:59:01.384060 4708 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37500f59-8db5-4c44-b24c-5abacbddf26b" Feb 27 16:59:01 crc kubenswrapper[4708]: I0227 16:59:01.384117 4708 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37500f59-8db5-4c44-b24c-5abacbddf26b" Feb 27 16:59:01 crc kubenswrapper[4708]: I0227 16:59:01.392579 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:59:02 crc kubenswrapper[4708]: I0227 16:59:02.241274 4708 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="13c651be-bb2d-4c3e-be7a-9f1bbacc0324" Feb 27 16:59:02 crc kubenswrapper[4708]: I0227 16:59:02.388631 4708 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37500f59-8db5-4c44-b24c-5abacbddf26b" Feb 27 16:59:02 crc kubenswrapper[4708]: I0227 16:59:02.389138 4708 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37500f59-8db5-4c44-b24c-5abacbddf26b" Feb 27 16:59:02 crc kubenswrapper[4708]: I0227 16:59:02.391143 4708 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="13c651be-bb2d-4c3e-be7a-9f1bbacc0324" Feb 27 16:59:02 crc kubenswrapper[4708]: I0227 16:59:02.541932 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Feb 27 16:59:10 crc kubenswrapper[4708]: I0227 16:59:10.218882 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" 
Feb 27 16:59:10 crc kubenswrapper[4708]: I0227 16:59:10.744042 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Feb 27 16:59:11 crc kubenswrapper[4708]: I0227 16:59:11.130567 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Feb 27 16:59:11 crc kubenswrapper[4708]: I0227 16:59:11.340211 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Feb 27 16:59:11 crc kubenswrapper[4708]: I0227 16:59:11.378169 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Feb 27 16:59:11 crc kubenswrapper[4708]: I0227 16:59:11.561805 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Feb 27 16:59:12 crc kubenswrapper[4708]: I0227 16:59:12.424208 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Feb 27 16:59:12 crc kubenswrapper[4708]: I0227 16:59:12.485048 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Feb 27 16:59:12 crc kubenswrapper[4708]: I0227 16:59:12.548518 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Feb 27 16:59:12 crc kubenswrapper[4708]: I0227 16:59:12.632127 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Feb 27 16:59:12 crc kubenswrapper[4708]: I0227 16:59:12.821773 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Feb 27 16:59:12 crc kubenswrapper[4708]: I0227 16:59:12.955513 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Feb 27 16:59:12 crc kubenswrapper[4708]: I0227 16:59:12.965732 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.160300 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.208432 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.209167 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.357261 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.502305 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.615671 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.673382 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.699033 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.768601 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.816118 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.846777 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.847138 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.851587 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.867096 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.887293 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.950934 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Feb 27 16:59:13 crc kubenswrapper[4708]: I0227 16:59:13.998000 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.025452 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.213722 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.238389 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.293008 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.424058 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.448489 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.492983 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.641723 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.681275 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.758560 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.864302 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.881327 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.923381 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.952123 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.973211 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Feb 27 16:59:14 crc kubenswrapper[4708]: I0227 16:59:14.984980 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Feb 27 16:59:15 crc kubenswrapper[4708]: I0227 16:59:15.055216 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Feb 27 16:59:15 crc kubenswrapper[4708]: I0227 16:59:15.100988 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Feb 27 16:59:15 crc kubenswrapper[4708]: I0227 16:59:15.180277 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Feb 27 16:59:15 crc kubenswrapper[4708]: I0227 16:59:15.236053 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Feb 27 16:59:15 crc kubenswrapper[4708]: I0227 16:59:15.274261 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Feb 27 16:59:15 crc kubenswrapper[4708]: I0227 16:59:15.478401 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Feb 27 16:59:15 crc kubenswrapper[4708]: I0227 16:59:15.488611 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Feb 27 16:59:15 crc kubenswrapper[4708]: I0227 16:59:15.529061 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Feb 27 16:59:15 crc kubenswrapper[4708]: I0227 16:59:15.643367 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Feb 27 16:59:15 crc kubenswrapper[4708]: I0227 16:59:15.667750 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Feb 27 16:59:15 crc kubenswrapper[4708]: I0227 16:59:15.717332 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Feb 27 16:59:15 crc kubenswrapper[4708]: I0227 16:59:15.913161 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.029332 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.034248 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.068346 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.139544 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.172803 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.180761 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.223745 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.228829 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.264878 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.302835 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.311035 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.370496 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.419778 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.478635 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.512722 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.593037 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.715188 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.787360 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.793349 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.810245 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.872799 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.897877 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.926613 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Feb 27 16:59:16 crc kubenswrapper[4708]: I0227 16:59:16.948838 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.024212 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.042818 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.062105 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.131440 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.163187 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.260425 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.342712 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.507226 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.529590 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.535515 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.557208 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.559519 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.575750 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.580738 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.599698 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.729298 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.792274 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.805548 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Feb 27 16:59:17 crc kubenswrapper[4708]: I0227 16:59:17.806373 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Feb 27 16:59:18 crc kubenswrapper[4708]: I0227 16:59:18.006458 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Feb 27 16:59:18 crc kubenswrapper[4708]: I0227 16:59:18.143839 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Feb 27 16:59:18 crc kubenswrapper[4708]: I0227 16:59:18.145330 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Feb 27 16:59:18 crc kubenswrapper[4708]: I0227 16:59:18.185925 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Feb 27 16:59:18 crc kubenswrapper[4708]: I0227 16:59:18.335266 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Feb 27 16:59:18 crc kubenswrapper[4708]: I0227 16:59:18.538815 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Feb 27 16:59:18 crc kubenswrapper[4708]: I0227 16:59:18.564438 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Feb 27 16:59:18 crc kubenswrapper[4708]: I0227 16:59:18.565921 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Feb 27 16:59:18 crc kubenswrapper[4708]: I0227 16:59:18.664503 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Feb 27 16:59:18 crc kubenswrapper[4708]: I0227 16:59:18.969195 4708 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Feb 27 16:59:18 crc kubenswrapper[4708]: I0227 16:59:18.990180 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.024483 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.063121 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.108502 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.110727 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.135836 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.189939 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.304096 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.304298 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.305100 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.312399 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.385356 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.441722 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.476629 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.709399 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.756661 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.978809 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Feb 27 16:59:19 crc kubenswrapper[4708]: I0227 16:59:19.992857 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.055506 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.129267 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.153221 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.352173 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.450371 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.469502 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.613479 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.621443 4708 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.624979 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.690996 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.710979 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.815025 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.835324 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.836508 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.855547 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.874948 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.898458 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Feb 27 16:59:20 crc kubenswrapper[4708]: I0227 16:59:20.948120 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.069238 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.170052 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.264068 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.278481 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.282529 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.295893 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.330387 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.330435 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.356237 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.423383 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.444030 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.496521 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.529821 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.570434 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.598782 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.608314 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.661065 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.676775 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.697540 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.828261 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.831759 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Feb 27 16:59:21 crc kubenswrapper[4708]: I0227 16:59:21.936199 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.024181 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.088111 4708 reflector.go:368] Caches populated for *v1.ConfigMap from
object-"openshift-network-diagnostics"/"kube-root-ca.crt" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.188320 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.258669 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.263519 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.296194 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.392451 4708 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.442505 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.448764 4708 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.473580 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.712572 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.730520 4708 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.733189 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podStartSLOduration=43.73316438 podStartE2EDuration="43.73316438s" podCreationTimestamp="2026-02-27 16:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:59:00.546988331 +0000 UTC m=+339.062785918" watchObservedRunningTime="2026-02-27 16:59:22.73316438 +0000 UTC m=+361.248961997" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.738462 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-55dsj"] Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.738540 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9"] Feb 27 16:59:22 crc kubenswrapper[4708]: E0227 16:59:22.738810 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" containerName="oauth-openshift" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.738831 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" containerName="oauth-openshift" Feb 27 16:59:22 crc kubenswrapper[4708]: E0227 16:59:22.738940 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" containerName="installer" Feb 27 16:59:22 
crc kubenswrapper[4708]: I0227 16:59:22.738955 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" containerName="installer" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.739148 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b2eb3a2-5689-482d-82a4-b8ec5edf2418" containerName="installer" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.739167 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" containerName="oauth-openshift" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.739345 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.739532 4708 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37500f59-8db5-4c44-b24c-5abacbddf26b" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.739617 4708 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="37500f59-8db5-4c44-b24c-5abacbddf26b" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.739675 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j29cw"] Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.739880 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.740051 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-j29cw" podUID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" containerName="registry-server" containerID="cri-o://7afa3b2f9f578c5381fd0f454b938e72f4b6cdfe3442b722d2d807683febb5ab" gracePeriod=2 Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.746119 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.746531 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.746685 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.746758 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.746702 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.747262 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.748059 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.777948 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.778310 4708 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.778292082 podStartE2EDuration="22.778292082s" podCreationTimestamp="2026-02-27 16:59:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:59:22.777406357 +0000 UTC m=+361.293203974" watchObservedRunningTime="2026-02-27 16:59:22.778292082 +0000 UTC m=+361.294089669" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.858510 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.869243 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa-client-ca\") pod \"route-controller-manager-7c4bdc96ff-4s2b9\" (UID: \"d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa\") " pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.869361 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kn56d\" (UniqueName: \"kubernetes.io/projected/d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa-kube-api-access-kn56d\") pod \"route-controller-manager-7c4bdc96ff-4s2b9\" (UID: \"d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa\") " pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.869482 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa-serving-cert\") pod \"route-controller-manager-7c4bdc96ff-4s2b9\" (UID: \"d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa\") " pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.869594 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa-config\") pod \"route-controller-manager-7c4bdc96ff-4s2b9\" (UID: \"d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa\") " pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.926316 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.970790 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa-client-ca\") pod \"route-controller-manager-7c4bdc96ff-4s2b9\" (UID: \"d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa\") " pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.970897 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kn56d\" (UniqueName: \"kubernetes.io/projected/d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa-kube-api-access-kn56d\") pod \"route-controller-manager-7c4bdc96ff-4s2b9\" (UID: \"d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa\") " pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:22 crc 
kubenswrapper[4708]: I0227 16:59:22.970985 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa-serving-cert\") pod \"route-controller-manager-7c4bdc96ff-4s2b9\" (UID: \"d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa\") " pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.971042 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa-config\") pod \"route-controller-manager-7c4bdc96ff-4s2b9\" (UID: \"d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa\") " pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.972788 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa-client-ca\") pod \"route-controller-manager-7c4bdc96ff-4s2b9\" (UID: \"d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa\") " pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.973309 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa-config\") pod \"route-controller-manager-7c4bdc96ff-4s2b9\" (UID: \"d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa\") " pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.989253 4708 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.989623 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://50ccc25fb701392fba2b6b461b90820ec8b4c74f3fe16296687dbf20847b1812" gracePeriod=5 Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.990458 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Feb 27 16:59:22 crc kubenswrapper[4708]: I0227 16:59:22.991040 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa-serving-cert\") pod \"route-controller-manager-7c4bdc96ff-4s2b9\" (UID: \"d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa\") " pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.007428 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kn56d\" (UniqueName: \"kubernetes.io/projected/d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa-kube-api-access-kn56d\") pod \"route-controller-manager-7c4bdc96ff-4s2b9\" (UID: \"d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa\") " pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.072229 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.083632 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.239126 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j29cw" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.275342 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-utilities\") pod \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\" (UID: \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\") " Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.275428 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5jjf\" (UniqueName: \"kubernetes.io/projected/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-kube-api-access-p5jjf\") pod \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\" (UID: \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\") " Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.277560 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-utilities" (OuterVolumeSpecName: "utilities") pod "73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" (UID: "73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.282014 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-kube-api-access-p5jjf" (OuterVolumeSpecName: "kube-api-access-p5jjf") pod "73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" (UID: "73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db"). InnerVolumeSpecName "kube-api-access-p5jjf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.287055 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-d9df679ff-twzzv"] Feb 27 16:59:23 crc kubenswrapper[4708]: E0227 16:59:23.287281 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.287296 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 16:59:23 crc kubenswrapper[4708]: E0227 16:59:23.287309 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" containerName="registry-server" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.287318 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" containerName="registry-server" Feb 27 16:59:23 crc kubenswrapper[4708]: E0227 16:59:23.287330 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" containerName="extract-utilities" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.287339 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" containerName="extract-utilities" Feb 27 16:59:23 crc kubenswrapper[4708]: E0227 16:59:23.287351 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" containerName="extract-content" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.287360 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" containerName="extract-content" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.287478 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.287489 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" containerName="registry-server" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.287915 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.293262 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.297106 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.297991 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.298626 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d9df679ff-twzzv"] Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.298801 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.304200 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.304447 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.304658 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.304780 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.304888 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.304961 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.305038 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.305117 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.307391 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.311193 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.321387 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.321393 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.376180 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-catalog-content\") pod \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\" (UID: \"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db\") " Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.376862 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-audit-policies\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.376943 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-service-ca\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.377005 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8pdv\" (UniqueName: \"kubernetes.io/projected/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-kube-api-access-h8pdv\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.377049 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-session\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.377084 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-router-certs\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.377221 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-audit-dir\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.377281 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.377354 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.377448 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-user-template-login\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.377487 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.377527 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.377608 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-user-template-error\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.377650 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.377681 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.377746 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.377771 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5jjf\" (UniqueName: 
\"kubernetes.io/projected/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-kube-api-access-p5jjf\") on node \"crc\" DevicePath \"\"" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.390624 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.399785 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.425783 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.478441 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-user-template-error\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.478505 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.478544 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.478588 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-audit-policies\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.478625 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-service-ca\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.478678 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h8pdv\" (UniqueName: \"kubernetes.io/projected/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-kube-api-access-h8pdv\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.478714 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-session\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.478748 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-router-certs\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.478793 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-audit-dir\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.478823 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.478897 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.478949 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-user-template-login\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.478981 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.479013 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.479184 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-audit-dir\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.479647 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-audit-policies\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.479794 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-service-ca\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.480035 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-cliconfig\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.481548 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.483699 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.484541 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-router-certs\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.484704 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-user-template-error\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.485185 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-user-template-login\") pod 
\"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.485831 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-session\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.487330 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.487685 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" (UID: "73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.490388 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.490745 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-v4-0-config-system-serving-cert\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.501347 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h8pdv\" (UniqueName: \"kubernetes.io/projected/d671fe4f-55f2-4686-8b81-f0ce92c9c32a-kube-api-access-h8pdv\") pod \"oauth-openshift-d9df679ff-twzzv\" (UID: \"d671fe4f-55f2-4686-8b81-f0ce92c9c32a\") " pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.526650 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9"] Feb 27 16:59:23 crc kubenswrapper[4708]: W0227 16:59:23.542546 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd23f0e08_66ac_4f2b_a0e8_25eef5fbc5fa.slice/crio-6372ea7eede7f96fed9c1bcb853b2deefc93c9cf19f3056c83e1ef7ce002bb63 WatchSource:0}: Error finding container 6372ea7eede7f96fed9c1bcb853b2deefc93c9cf19f3056c83e1ef7ce002bb63: Status 404 returned error can't find the container with id 6372ea7eede7f96fed9c1bcb853b2deefc93c9cf19f3056c83e1ef7ce002bb63 Feb 27 16:59:23 crc 
kubenswrapper[4708]: I0227 16:59:23.549680 4708 generic.go:334] "Generic (PLEG): container finished" podID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" containerID="7afa3b2f9f578c5381fd0f454b938e72f4b6cdfe3442b722d2d807683febb5ab" exitCode=0 Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.549829 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-j29cw" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.550014 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j29cw" event={"ID":"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db","Type":"ContainerDied","Data":"7afa3b2f9f578c5381fd0f454b938e72f4b6cdfe3442b722d2d807683febb5ab"} Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.550062 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-j29cw" event={"ID":"73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db","Type":"ContainerDied","Data":"9376441537c2a3cb6e7e5ae47a749215f4af3e48a2782d36e0fbc14fcdbe5d18"} Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.550094 4708 scope.go:117] "RemoveContainer" containerID="7afa3b2f9f578c5381fd0f454b938e72f4b6cdfe3442b722d2d807683febb5ab" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.564390 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.574715 4708 scope.go:117] "RemoveContainer" containerID="8a91b78f2c5049316d882e26055972524f6d4af227117da670880771ce3fd676" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.579916 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.587608 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.591167 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-j29cw"] Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.597690 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-j29cw"] Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.614078 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.618979 4708 scope.go:117] "RemoveContainer" containerID="022d4cb88f294543c11716a99c427f3b6154ce07c0280778163d1981d8be1bdc" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.648242 4708 scope.go:117] "RemoveContainer" containerID="7afa3b2f9f578c5381fd0f454b938e72f4b6cdfe3442b722d2d807683febb5ab" Feb 27 16:59:23 crc kubenswrapper[4708]: E0227 16:59:23.648781 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7afa3b2f9f578c5381fd0f454b938e72f4b6cdfe3442b722d2d807683febb5ab\": container with ID starting with 7afa3b2f9f578c5381fd0f454b938e72f4b6cdfe3442b722d2d807683febb5ab not found: ID does not exist" containerID="7afa3b2f9f578c5381fd0f454b938e72f4b6cdfe3442b722d2d807683febb5ab" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.648834 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7afa3b2f9f578c5381fd0f454b938e72f4b6cdfe3442b722d2d807683febb5ab"} err="failed to get container status \"7afa3b2f9f578c5381fd0f454b938e72f4b6cdfe3442b722d2d807683febb5ab\": rpc error: code = NotFound desc = could not find container \"7afa3b2f9f578c5381fd0f454b938e72f4b6cdfe3442b722d2d807683febb5ab\": container with ID starting with 7afa3b2f9f578c5381fd0f454b938e72f4b6cdfe3442b722d2d807683febb5ab not found: ID does not exist" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.648935 4708 scope.go:117] "RemoveContainer" containerID="8a91b78f2c5049316d882e26055972524f6d4af227117da670880771ce3fd676" Feb 27 16:59:23 crc kubenswrapper[4708]: E0227 16:59:23.649250 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a91b78f2c5049316d882e26055972524f6d4af227117da670880771ce3fd676\": container with ID starting with 8a91b78f2c5049316d882e26055972524f6d4af227117da670880771ce3fd676 not found: ID does not exist" containerID="8a91b78f2c5049316d882e26055972524f6d4af227117da670880771ce3fd676" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.649284 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a91b78f2c5049316d882e26055972524f6d4af227117da670880771ce3fd676"} err="failed to get container status \"8a91b78f2c5049316d882e26055972524f6d4af227117da670880771ce3fd676\": rpc error: code = NotFound desc = could not find container \"8a91b78f2c5049316d882e26055972524f6d4af227117da670880771ce3fd676\": container with ID starting with 8a91b78f2c5049316d882e26055972524f6d4af227117da670880771ce3fd676 not found: ID does not exist" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.649310 4708 scope.go:117] "RemoveContainer" containerID="022d4cb88f294543c11716a99c427f3b6154ce07c0280778163d1981d8be1bdc" Feb 27 16:59:23 crc kubenswrapper[4708]: E0227 16:59:23.649650 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"022d4cb88f294543c11716a99c427f3b6154ce07c0280778163d1981d8be1bdc\": container with ID starting with 022d4cb88f294543c11716a99c427f3b6154ce07c0280778163d1981d8be1bdc not found: ID does not exist" containerID="022d4cb88f294543c11716a99c427f3b6154ce07c0280778163d1981d8be1bdc" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.649690 4708 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"022d4cb88f294543c11716a99c427f3b6154ce07c0280778163d1981d8be1bdc"} err="failed to get container status \"022d4cb88f294543c11716a99c427f3b6154ce07c0280778163d1981d8be1bdc\": rpc error: code = NotFound desc = could not find container \"022d4cb88f294543c11716a99c427f3b6154ce07c0280778163d1981d8be1bdc\": container with ID starting with 022d4cb88f294543c11716a99c427f3b6154ce07c0280778163d1981d8be1bdc not found: ID does not exist" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.743475 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.795525 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.809662 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.893477 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Feb 27 16:59:23 crc kubenswrapper[4708]: I0227 16:59:23.899883 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.023154 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.153551 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-d9df679ff-twzzv"] Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.179000 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.236563 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3de1e003-2dee-4d76-86cd-cd60680535bd" path="/var/lib/kubelet/pods/3de1e003-2dee-4d76-86cd-cd60680535bd/volumes" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.240917 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db" path="/var/lib/kubelet/pods/73c1fe4a-7d71-456f-b7e5-3e4ff5b4f6db/volumes" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.287438 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.341597 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.418738 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.463957 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.504387 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.556433 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" event={"ID":"d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa","Type":"ContainerStarted","Data":"ba126cdf26825660f249f2bd3f2332bcba01c57fa02bda92a6dbcbfa783f9972"} Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.556481 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" event={"ID":"d23f0e08-66ac-4f2b-a0e8-25eef5fbc5fa","Type":"ContainerStarted","Data":"6372ea7eede7f96fed9c1bcb853b2deefc93c9cf19f3056c83e1ef7ce002bb63"} Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.556724 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.562067 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" event={"ID":"d671fe4f-55f2-4686-8b81-f0ce92c9c32a","Type":"ContainerStarted","Data":"1b9e863dba25e63a1c48631dcb64c380b3d33b0801d3986a27325d8ddbf692df"} Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.562129 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.562146 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" event={"ID":"d671fe4f-55f2-4686-8b81-f0ce92c9c32a","Type":"ContainerStarted","Data":"5359afe909851c22ee252adf1850e4ba8a5b7e86ebc86f68d44d8f3e38b7eb98"} Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.563525 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.567892 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.584400 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7c4bdc96ff-4s2b9" podStartSLOduration=49.584382924 podStartE2EDuration="49.584382924s" podCreationTimestamp="2026-02-27 16:58:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:59:24.581098018 +0000 UTC m=+363.096895605" watchObservedRunningTime="2026-02-27 16:59:24.584382924 +0000 UTC m=+363.100180521" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.633243 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.647732 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.867084 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.890953 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.896773 4708 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Feb 27 16:59:24 crc kubenswrapper[4708]: I0227 16:59:24.913207 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Feb 27 16:59:25 crc kubenswrapper[4708]: I0227 16:59:25.093305 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Feb 27 16:59:25 crc kubenswrapper[4708]: I0227 16:59:25.120224 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" Feb 27 16:59:25 crc kubenswrapper[4708]: I0227 16:59:25.151141 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-d9df679ff-twzzv" podStartSLOduration=67.151122658 podStartE2EDuration="1m7.151122658s" podCreationTimestamp="2026-02-27 16:58:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 16:59:24.645653116 +0000 UTC m=+363.161450713" watchObservedRunningTime="2026-02-27 16:59:25.151122658 +0000 UTC m=+363.666920245" Feb 27 16:59:25 crc kubenswrapper[4708]: I0227 16:59:25.179465 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Feb 27 16:59:25 crc kubenswrapper[4708]: I0227 16:59:25.235743 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Feb 27 16:59:25 crc kubenswrapper[4708]: I0227 16:59:25.334836 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Feb 27 16:59:25 crc kubenswrapper[4708]: I0227 16:59:25.358033 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Feb 27 16:59:25 crc kubenswrapper[4708]: I0227 16:59:25.815710 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Feb 27 16:59:25 crc kubenswrapper[4708]: I0227 16:59:25.825208 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Feb 27 16:59:25 crc kubenswrapper[4708]: I0227 16:59:25.935585 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Feb 27 16:59:26 crc kubenswrapper[4708]: I0227 16:59:26.158972 4708 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Feb 27 16:59:26 crc kubenswrapper[4708]: I0227 16:59:26.179361 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Feb 27 16:59:26 crc kubenswrapper[4708]: I0227 16:59:26.186550 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Feb 27 16:59:26 crc kubenswrapper[4708]: I0227 16:59:26.530007 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Feb 27 16:59:26 crc kubenswrapper[4708]: I0227 16:59:26.552137 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Feb 27 16:59:26 crc kubenswrapper[4708]: I0227 16:59:26.776202 4708 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Feb 27 16:59:26 crc kubenswrapper[4708]: I0227 16:59:26.963722 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Feb 27 16:59:27 crc kubenswrapper[4708]: I0227 16:59:27.492270 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Feb 27 16:59:27 crc kubenswrapper[4708]: I0227 16:59:27.564606 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Feb 27 16:59:27 crc kubenswrapper[4708]: I0227 16:59:27.579326 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Feb 27 16:59:27 crc kubenswrapper[4708]: I0227 16:59:27.762685 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.587888 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.587986 4708 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="50ccc25fb701392fba2b6b461b90820ec8b4c74f3fe16296687dbf20847b1812" exitCode=137 Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.588049 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e070920d0bf93fdc41bf1603e354a4768303c8b341c07a29fb947620c58ebf2" Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.597756 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.597889 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.653769 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.654165 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.654355 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.654543 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.654742 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.655310 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.655510 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.655674 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.656973 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.669178 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.756720 4708 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.756779 4708 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.756800 4708 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.756823 4708 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Feb 27 16:59:28 crc kubenswrapper[4708]: I0227 16:59:28.756871 4708 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Feb 27 16:59:29 crc kubenswrapper[4708]: I0227 16:59:29.595072 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Feb 27 16:59:30 crc kubenswrapper[4708]: I0227 16:59:30.240174 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Feb 27 16:59:30 crc kubenswrapper[4708]: I0227 16:59:30.241060 4708 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="" Feb 27 16:59:30 crc kubenswrapper[4708]: I0227 16:59:30.260242 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 16:59:30 crc kubenswrapper[4708]: I0227 16:59:30.260295 4708 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6d5168d6-9be5-4565-aef2-55692a7fc1d3" Feb 27 16:59:30 crc kubenswrapper[4708]: I0227 16:59:30.266681 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Feb 27 16:59:30 crc kubenswrapper[4708]: I0227 16:59:30.266767 4708 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" mirrorPodUID="6d5168d6-9be5-4565-aef2-55692a7fc1d3" Feb 27 16:59:45 crc kubenswrapper[4708]: I0227 16:59:45.697592 4708 generic.go:334] "Generic (PLEG): container finished" podID="84260b20-4df9-4dea-9524-bd9c18ef7074" containerID="350a3a05f06d231e3c6fca76f2892edf006c0cc2c07baf965c6316bddd254159" exitCode=0 Feb 27 16:59:45 crc kubenswrapper[4708]: I0227 16:59:45.697643 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" event={"ID":"84260b20-4df9-4dea-9524-bd9c18ef7074","Type":"ContainerDied","Data":"350a3a05f06d231e3c6fca76f2892edf006c0cc2c07baf965c6316bddd254159"} Feb 27 16:59:45 crc kubenswrapper[4708]: I0227 16:59:45.698395 4708 scope.go:117] "RemoveContainer" containerID="350a3a05f06d231e3c6fca76f2892edf006c0cc2c07baf965c6316bddd254159" Feb 27 16:59:46 crc kubenswrapper[4708]: I0227 16:59:46.706565 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" event={"ID":"84260b20-4df9-4dea-9524-bd9c18ef7074","Type":"ContainerStarted","Data":"c49c12946aed2dca3071583da34f12077f83b18416e5cdf8c31614c378bf2ff3"} Feb 27 16:59:46 crc kubenswrapper[4708]: I0227 16:59:46.707905 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 16:59:46 crc kubenswrapper[4708]: I0227 16:59:46.709623 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.183367 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536860-fnz5n"] Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.185518 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536860-fnz5n" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.188844 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.189448 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.189738 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.196352 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww"] Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.197524 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.199466 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.200410 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.205284 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536860-fnz5n"] Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.212326 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww"] Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.314403 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llmzj\" (UniqueName: \"kubernetes.io/projected/2e099232-71ed-4051-9c36-077664c3cd78-kube-api-access-llmzj\") pod \"collect-profiles-29536860-2z4ww\" (UID: \"2e099232-71ed-4051-9c36-077664c3cd78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.314522 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e099232-71ed-4051-9c36-077664c3cd78-config-volume\") pod \"collect-profiles-29536860-2z4ww\" (UID: \"2e099232-71ed-4051-9c36-077664c3cd78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.315079 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e099232-71ed-4051-9c36-077664c3cd78-secret-volume\") pod \"collect-profiles-29536860-2z4ww\" (UID: \"2e099232-71ed-4051-9c36-077664c3cd78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.315157 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gwnw\" (UniqueName: \"kubernetes.io/projected/7424251f-c1f8-48a8-8de9-51b1519ccb44-kube-api-access-2gwnw\") pod \"auto-csr-approver-29536860-fnz5n\" (UID: \"7424251f-c1f8-48a8-8de9-51b1519ccb44\") " pod="openshift-infra/auto-csr-approver-29536860-fnz5n" Feb 
27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.416891 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e099232-71ed-4051-9c36-077664c3cd78-config-volume\") pod \"collect-profiles-29536860-2z4ww\" (UID: \"2e099232-71ed-4051-9c36-077664c3cd78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.417052 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e099232-71ed-4051-9c36-077664c3cd78-secret-volume\") pod \"collect-profiles-29536860-2z4ww\" (UID: \"2e099232-71ed-4051-9c36-077664c3cd78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.417089 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gwnw\" (UniqueName: \"kubernetes.io/projected/7424251f-c1f8-48a8-8de9-51b1519ccb44-kube-api-access-2gwnw\") pod \"auto-csr-approver-29536860-fnz5n\" (UID: \"7424251f-c1f8-48a8-8de9-51b1519ccb44\") " pod="openshift-infra/auto-csr-approver-29536860-fnz5n" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.417168 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llmzj\" (UniqueName: \"kubernetes.io/projected/2e099232-71ed-4051-9c36-077664c3cd78-kube-api-access-llmzj\") pod \"collect-profiles-29536860-2z4ww\" (UID: \"2e099232-71ed-4051-9c36-077664c3cd78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.419044 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e099232-71ed-4051-9c36-077664c3cd78-config-volume\") pod \"collect-profiles-29536860-2z4ww\" (UID: \"2e099232-71ed-4051-9c36-077664c3cd78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.425436 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e099232-71ed-4051-9c36-077664c3cd78-secret-volume\") pod \"collect-profiles-29536860-2z4ww\" (UID: \"2e099232-71ed-4051-9c36-077664c3cd78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.446317 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gwnw\" (UniqueName: \"kubernetes.io/projected/7424251f-c1f8-48a8-8de9-51b1519ccb44-kube-api-access-2gwnw\") pod \"auto-csr-approver-29536860-fnz5n\" (UID: \"7424251f-c1f8-48a8-8de9-51b1519ccb44\") " pod="openshift-infra/auto-csr-approver-29536860-fnz5n" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.446831 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llmzj\" (UniqueName: \"kubernetes.io/projected/2e099232-71ed-4051-9c36-077664c3cd78-kube-api-access-llmzj\") pod \"collect-profiles-29536860-2z4ww\" (UID: \"2e099232-71ed-4051-9c36-077664c3cd78\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.522654 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536860-fnz5n" Feb 27 17:00:00 crc kubenswrapper[4708]: I0227 17:00:00.538119 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" Feb 27 17:00:01 crc kubenswrapper[4708]: W0227 17:00:01.058821 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7424251f_c1f8_48a8_8de9_51b1519ccb44.slice/crio-7a64521a8d2e15655128315978fa52881393d6cb9b60ccae5e6bf90b1e11c56c WatchSource:0}: Error finding container 7a64521a8d2e15655128315978fa52881393d6cb9b60ccae5e6bf90b1e11c56c: Status 404 returned error can't find the container with id 7a64521a8d2e15655128315978fa52881393d6cb9b60ccae5e6bf90b1e11c56c Feb 27 17:00:01 crc kubenswrapper[4708]: I0227 17:00:01.061149 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536860-fnz5n"] Feb 27 17:00:01 crc kubenswrapper[4708]: I0227 17:00:01.127161 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww"] Feb 27 17:00:01 crc kubenswrapper[4708]: I0227 17:00:01.810392 4708 generic.go:334] "Generic (PLEG): container finished" podID="2e099232-71ed-4051-9c36-077664c3cd78" containerID="7c068807f0fd39e0a24b085993460f73a7625628b33ec26364f4756cb56391d5" exitCode=0 Feb 27 17:00:01 crc kubenswrapper[4708]: I0227 17:00:01.810540 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" event={"ID":"2e099232-71ed-4051-9c36-077664c3cd78","Type":"ContainerDied","Data":"7c068807f0fd39e0a24b085993460f73a7625628b33ec26364f4756cb56391d5"} Feb 27 17:00:01 crc kubenswrapper[4708]: I0227 17:00:01.810616 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" event={"ID":"2e099232-71ed-4051-9c36-077664c3cd78","Type":"ContainerStarted","Data":"54b7096c71f25692eb655b1643d9a0c3007b53199a8e2c420382f965896ae6ad"} Feb 27 17:00:01 crc kubenswrapper[4708]: I0227 17:00:01.811981 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536860-fnz5n" event={"ID":"7424251f-c1f8-48a8-8de9-51b1519ccb44","Type":"ContainerStarted","Data":"7a64521a8d2e15655128315978fa52881393d6cb9b60ccae5e6bf90b1e11c56c"} Feb 27 17:00:03 crc kubenswrapper[4708]: I0227 17:00:03.257705 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" Feb 27 17:00:03 crc kubenswrapper[4708]: I0227 17:00:03.265511 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llmzj\" (UniqueName: \"kubernetes.io/projected/2e099232-71ed-4051-9c36-077664c3cd78-kube-api-access-llmzj\") pod \"2e099232-71ed-4051-9c36-077664c3cd78\" (UID: \"2e099232-71ed-4051-9c36-077664c3cd78\") " Feb 27 17:00:03 crc kubenswrapper[4708]: I0227 17:00:03.265550 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e099232-71ed-4051-9c36-077664c3cd78-secret-volume\") pod \"2e099232-71ed-4051-9c36-077664c3cd78\" (UID: \"2e099232-71ed-4051-9c36-077664c3cd78\") " Feb 27 17:00:03 crc kubenswrapper[4708]: I0227 17:00:03.265593 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e099232-71ed-4051-9c36-077664c3cd78-config-volume\") pod \"2e099232-71ed-4051-9c36-077664c3cd78\" (UID: \"2e099232-71ed-4051-9c36-077664c3cd78\") " Feb 27 17:00:03 crc kubenswrapper[4708]: I0227 17:00:03.266354 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e099232-71ed-4051-9c36-077664c3cd78-config-volume" (OuterVolumeSpecName: "config-volume") pod "2e099232-71ed-4051-9c36-077664c3cd78" (UID: "2e099232-71ed-4051-9c36-077664c3cd78"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:00:03 crc kubenswrapper[4708]: I0227 17:00:03.271216 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e099232-71ed-4051-9c36-077664c3cd78-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2e099232-71ed-4051-9c36-077664c3cd78" (UID: "2e099232-71ed-4051-9c36-077664c3cd78"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:00:03 crc kubenswrapper[4708]: I0227 17:00:03.272617 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e099232-71ed-4051-9c36-077664c3cd78-kube-api-access-llmzj" (OuterVolumeSpecName: "kube-api-access-llmzj") pod "2e099232-71ed-4051-9c36-077664c3cd78" (UID: "2e099232-71ed-4051-9c36-077664c3cd78"). InnerVolumeSpecName "kube-api-access-llmzj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:00:03 crc kubenswrapper[4708]: I0227 17:00:03.367003 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llmzj\" (UniqueName: \"kubernetes.io/projected/2e099232-71ed-4051-9c36-077664c3cd78-kube-api-access-llmzj\") on node \"crc\" DevicePath \"\"" Feb 27 17:00:03 crc kubenswrapper[4708]: I0227 17:00:03.367052 4708 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2e099232-71ed-4051-9c36-077664c3cd78-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:00:03 crc kubenswrapper[4708]: I0227 17:00:03.367071 4708 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2e099232-71ed-4051-9c36-077664c3cd78-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:00:03 crc kubenswrapper[4708]: I0227 17:00:03.828317 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" event={"ID":"2e099232-71ed-4051-9c36-077664c3cd78","Type":"ContainerDied","Data":"54b7096c71f25692eb655b1643d9a0c3007b53199a8e2c420382f965896ae6ad"} Feb 27 17:00:03 crc kubenswrapper[4708]: I0227 17:00:03.828753 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54b7096c71f25692eb655b1643d9a0c3007b53199a8e2c420382f965896ae6ad" Feb 27 17:00:03 crc kubenswrapper[4708]: I0227 17:00:03.828429 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww" Feb 27 17:00:05 crc kubenswrapper[4708]: I0227 17:00:05.631638 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:00:05 crc kubenswrapper[4708]: I0227 17:00:05.631726 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:00:10 crc kubenswrapper[4708]: I0227 17:00:10.883536 4708 generic.go:334] "Generic (PLEG): container finished" podID="7424251f-c1f8-48a8-8de9-51b1519ccb44" containerID="3fb7c56ad736d08f51881cbad04dd8f518cccf8fdb5151b3f1168adcad35b4d3" exitCode=0 Feb 27 17:00:10 crc kubenswrapper[4708]: I0227 17:00:10.883639 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536860-fnz5n" event={"ID":"7424251f-c1f8-48a8-8de9-51b1519ccb44","Type":"ContainerDied","Data":"3fb7c56ad736d08f51881cbad04dd8f518cccf8fdb5151b3f1168adcad35b4d3"} Feb 27 17:00:12 crc kubenswrapper[4708]: I0227 17:00:12.341500 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536860-fnz5n" Feb 27 17:00:12 crc kubenswrapper[4708]: I0227 17:00:12.510146 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gwnw\" (UniqueName: \"kubernetes.io/projected/7424251f-c1f8-48a8-8de9-51b1519ccb44-kube-api-access-2gwnw\") pod \"7424251f-c1f8-48a8-8de9-51b1519ccb44\" (UID: \"7424251f-c1f8-48a8-8de9-51b1519ccb44\") " Feb 27 17:00:12 crc kubenswrapper[4708]: I0227 17:00:12.515076 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7424251f-c1f8-48a8-8de9-51b1519ccb44-kube-api-access-2gwnw" (OuterVolumeSpecName: "kube-api-access-2gwnw") pod "7424251f-c1f8-48a8-8de9-51b1519ccb44" (UID: "7424251f-c1f8-48a8-8de9-51b1519ccb44"). InnerVolumeSpecName "kube-api-access-2gwnw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:00:12 crc kubenswrapper[4708]: I0227 17:00:12.611711 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gwnw\" (UniqueName: \"kubernetes.io/projected/7424251f-c1f8-48a8-8de9-51b1519ccb44-kube-api-access-2gwnw\") on node \"crc\" DevicePath \"\"" Feb 27 17:00:12 crc kubenswrapper[4708]: I0227 17:00:12.905122 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536860-fnz5n" event={"ID":"7424251f-c1f8-48a8-8de9-51b1519ccb44","Type":"ContainerDied","Data":"7a64521a8d2e15655128315978fa52881393d6cb9b60ccae5e6bf90b1e11c56c"} Feb 27 17:00:12 crc kubenswrapper[4708]: I0227 17:00:12.905179 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a64521a8d2e15655128315978fa52881393d6cb9b60ccae5e6bf90b1e11c56c" Feb 27 17:00:12 crc kubenswrapper[4708]: I0227 17:00:12.905196 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536860-fnz5n" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.347487 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7rtdw"] Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.348383 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7rtdw" podUID="9b733486-f273-4bd5-afa3-d35d3d1feafc" containerName="registry-server" containerID="cri-o://e29a98787c8b691639acd748be7e28a9e42a68c0b3e52d012a53316227dc7ef4" gracePeriod=30 Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.357085 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zvqlm"] Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.357625 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zvqlm" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerName="registry-server" containerID="cri-o://d368c91eaa9abf1a1b390b0c41dad6f21e25e60b9d769677af2ea186d33edd59" gracePeriod=30 Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.361886 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lzlm4"] Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.362101 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" podUID="84260b20-4df9-4dea-9524-bd9c18ef7074" containerName="marketplace-operator" containerID="cri-o://c49c12946aed2dca3071583da34f12077f83b18416e5cdf8c31614c378bf2ff3" gracePeriod=30 Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.380610 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5lwl"] Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.381473 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p5lwl" podUID="5c38d70c-968f-44dd-b42b-013bc033debb" containerName="registry-server" containerID="cri-o://c846e86955b2a75c54223b100b1e363379051dd705c7c97ec5ad027d05a26526" gracePeriod=30 Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.385522 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lmzsx"] Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.385734 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lmzsx" podUID="96160365-88cf-419c-a2d2-04818cde5016" containerName="registry-server" containerID="cri-o://a1c2669b0f45732a8d1f0bafb53b7294fa0c3e0072e535cc2721904b5fc7b17e" gracePeriod=30 Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.410944 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wvdjp"] Feb 27 17:00:19 crc kubenswrapper[4708]: E0227 17:00:19.411253 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7424251f-c1f8-48a8-8de9-51b1519ccb44" containerName="oc" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.411267 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7424251f-c1f8-48a8-8de9-51b1519ccb44" containerName="oc" Feb 27 17:00:19 crc kubenswrapper[4708]: E0227 17:00:19.411295 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e099232-71ed-4051-9c36-077664c3cd78" 
containerName="collect-profiles" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.411311 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e099232-71ed-4051-9c36-077664c3cd78" containerName="collect-profiles" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.411439 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e099232-71ed-4051-9c36-077664c3cd78" containerName="collect-profiles" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.411456 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7424251f-c1f8-48a8-8de9-51b1519ccb44" containerName="oc" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.411960 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.414858 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wvdjp"] Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.517456 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0004cd70-bc98-40ac-b46e-54e84ba076d5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wvdjp\" (UID: \"0004cd70-bc98-40ac-b46e-54e84ba076d5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.517592 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68n7h\" (UniqueName: \"kubernetes.io/projected/0004cd70-bc98-40ac-b46e-54e84ba076d5-kube-api-access-68n7h\") pod \"marketplace-operator-79b997595-wvdjp\" (UID: \"0004cd70-bc98-40ac-b46e-54e84ba076d5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.517688 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0004cd70-bc98-40ac-b46e-54e84ba076d5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wvdjp\" (UID: \"0004cd70-bc98-40ac-b46e-54e84ba076d5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.621314 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0004cd70-bc98-40ac-b46e-54e84ba076d5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wvdjp\" (UID: \"0004cd70-bc98-40ac-b46e-54e84ba076d5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.621668 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0004cd70-bc98-40ac-b46e-54e84ba076d5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wvdjp\" (UID: \"0004cd70-bc98-40ac-b46e-54e84ba076d5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.621713 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-68n7h\" (UniqueName: \"kubernetes.io/projected/0004cd70-bc98-40ac-b46e-54e84ba076d5-kube-api-access-68n7h\") pod 
\"marketplace-operator-79b997595-wvdjp\" (UID: \"0004cd70-bc98-40ac-b46e-54e84ba076d5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.622751 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0004cd70-bc98-40ac-b46e-54e84ba076d5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-wvdjp\" (UID: \"0004cd70-bc98-40ac-b46e-54e84ba076d5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.627841 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/0004cd70-bc98-40ac-b46e-54e84ba076d5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-wvdjp\" (UID: \"0004cd70-bc98-40ac-b46e-54e84ba076d5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.640712 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-68n7h\" (UniqueName: \"kubernetes.io/projected/0004cd70-bc98-40ac-b46e-54e84ba076d5-kube-api-access-68n7h\") pod \"marketplace-operator-79b997595-wvdjp\" (UID: \"0004cd70-bc98-40ac-b46e-54e84ba076d5\") " pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.834426 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.845268 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.890000 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.963306 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zvqlm" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.964988 4708 generic.go:334] "Generic (PLEG): container finished" podID="96160365-88cf-419c-a2d2-04818cde5016" containerID="a1c2669b0f45732a8d1f0bafb53b7294fa0c3e0072e535cc2721904b5fc7b17e" exitCode=0 Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.965036 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmzsx" event={"ID":"96160365-88cf-419c-a2d2-04818cde5016","Type":"ContainerDied","Data":"a1c2669b0f45732a8d1f0bafb53b7294fa0c3e0072e535cc2721904b5fc7b17e"} Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.966537 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5lwl" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.968293 4708 generic.go:334] "Generic (PLEG): container finished" podID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerID="d368c91eaa9abf1a1b390b0c41dad6f21e25e60b9d769677af2ea186d33edd59" exitCode=0 Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.968356 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvqlm" event={"ID":"5710135c-fd59-4ff6-b74a-ad7ab8730aff","Type":"ContainerDied","Data":"d368c91eaa9abf1a1b390b0c41dad6f21e25e60b9d769677af2ea186d33edd59"} Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.968383 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zvqlm" event={"ID":"5710135c-fd59-4ff6-b74a-ad7ab8730aff","Type":"ContainerDied","Data":"d83026fe88ba75305ca27a5ee02a909966a26a0637b8e06ab20769a0517bc190"} Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.968400 4708 scope.go:117] "RemoveContainer" containerID="d368c91eaa9abf1a1b390b0c41dad6f21e25e60b9d769677af2ea186d33edd59" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.968527 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zvqlm" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.981263 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.981428 4708 generic.go:334] "Generic (PLEG): container finished" podID="84260b20-4df9-4dea-9524-bd9c18ef7074" containerID="c49c12946aed2dca3071583da34f12077f83b18416e5cdf8c31614c378bf2ff3" exitCode=0 Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.981471 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" event={"ID":"84260b20-4df9-4dea-9524-bd9c18ef7074","Type":"ContainerDied","Data":"c49c12946aed2dca3071583da34f12077f83b18416e5cdf8c31614c378bf2ff3"} Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.981494 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" event={"ID":"84260b20-4df9-4dea-9524-bd9c18ef7074","Type":"ContainerDied","Data":"6e06c4820271a6847317bff3eb94cbb8786261f0368611b49f944cec1b28746b"} Feb 27 17:00:19 crc kubenswrapper[4708]: I0227 17:00:19.981527 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.017486 4708 scope.go:117] "RemoveContainer" containerID="4c01e4bf4735bdef78ba03d8ce36795a067c155981a31a1ffd050b3aa6287fb7" Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.021258 4708 generic.go:334] "Generic (PLEG): container finished" podID="9b733486-f273-4bd5-afa3-d35d3d1feafc" containerID="e29a98787c8b691639acd748be7e28a9e42a68c0b3e52d012a53316227dc7ef4" exitCode=0 Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.021316 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rtdw" event={"ID":"9b733486-f273-4bd5-afa3-d35d3d1feafc","Type":"ContainerDied","Data":"e29a98787c8b691639acd748be7e28a9e42a68c0b3e52d012a53316227dc7ef4"} Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.021339 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7rtdw" event={"ID":"9b733486-f273-4bd5-afa3-d35d3d1feafc","Type":"ContainerDied","Data":"3ada598e5866979706ca456a772dd8aa0362eb5a71d8d9b3fdbb646fd59c7aed"} Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.021416 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7rtdw" Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.024595 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlrjp\" (UniqueName: \"kubernetes.io/projected/84260b20-4df9-4dea-9524-bd9c18ef7074-kube-api-access-zlrjp\") pod \"84260b20-4df9-4dea-9524-bd9c18ef7074\" (UID: \"84260b20-4df9-4dea-9524-bd9c18ef7074\") " Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.024652 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-trusted-ca\") pod \"84260b20-4df9-4dea-9524-bd9c18ef7074\" (UID: \"84260b20-4df9-4dea-9524-bd9c18ef7074\") " Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.024696 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b733486-f273-4bd5-afa3-d35d3d1feafc-utilities\") pod \"9b733486-f273-4bd5-afa3-d35d3d1feafc\" (UID: \"9b733486-f273-4bd5-afa3-d35d3d1feafc\") " Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.024719 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-586qv\" (UniqueName: \"kubernetes.io/projected/9b733486-f273-4bd5-afa3-d35d3d1feafc-kube-api-access-586qv\") pod \"9b733486-f273-4bd5-afa3-d35d3d1feafc\" (UID: \"9b733486-f273-4bd5-afa3-d35d3d1feafc\") " Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.024759 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-operator-metrics\") pod \"84260b20-4df9-4dea-9524-bd9c18ef7074\" (UID: \"84260b20-4df9-4dea-9524-bd9c18ef7074\") " Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.024780 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b733486-f273-4bd5-afa3-d35d3d1feafc-catalog-content\") pod \"9b733486-f273-4bd5-afa3-d35d3d1feafc\" (UID: \"9b733486-f273-4bd5-afa3-d35d3d1feafc\") " Feb 
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.028524 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "84260b20-4df9-4dea-9524-bd9c18ef7074" (UID: "84260b20-4df9-4dea-9524-bd9c18ef7074"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.033141 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84260b20-4df9-4dea-9524-bd9c18ef7074-kube-api-access-zlrjp" (OuterVolumeSpecName: "kube-api-access-zlrjp") pod "84260b20-4df9-4dea-9524-bd9c18ef7074" (UID: "84260b20-4df9-4dea-9524-bd9c18ef7074"). InnerVolumeSpecName "kube-api-access-zlrjp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.034119 4708 scope.go:117] "RemoveContainer" containerID="9e3b65065d5da29a790a656e44a96c146bf1e9fffd7e81c2843c3ffe4817efb6"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.035064 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b733486-f273-4bd5-afa3-d35d3d1feafc-utilities" (OuterVolumeSpecName: "utilities") pod "9b733486-f273-4bd5-afa3-d35d3d1feafc" (UID: "9b733486-f273-4bd5-afa3-d35d3d1feafc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.035802 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b733486-f273-4bd5-afa3-d35d3d1feafc-kube-api-access-586qv" (OuterVolumeSpecName: "kube-api-access-586qv") pod "9b733486-f273-4bd5-afa3-d35d3d1feafc" (UID: "9b733486-f273-4bd5-afa3-d35d3d1feafc"). InnerVolumeSpecName "kube-api-access-586qv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.036024 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "84260b20-4df9-4dea-9524-bd9c18ef7074" (UID: "84260b20-4df9-4dea-9524-bd9c18ef7074"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.037237 4708 generic.go:334] "Generic (PLEG): container finished" podID="5c38d70c-968f-44dd-b42b-013bc033debb" containerID="c846e86955b2a75c54223b100b1e363379051dd705c7c97ec5ad027d05a26526" exitCode=0
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.037293 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5lwl" event={"ID":"5c38d70c-968f-44dd-b42b-013bc033debb","Type":"ContainerDied","Data":"c846e86955b2a75c54223b100b1e363379051dd705c7c97ec5ad027d05a26526"}
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.037320 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5lwl" event={"ID":"5c38d70c-968f-44dd-b42b-013bc033debb","Type":"ContainerDied","Data":"8b65fd6ba1f80c3c40ee28b6e921689ffa5e1afd03e6422ab1d750d75b886657"}
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.037406 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5lwl"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.062558 4708 scope.go:117] "RemoveContainer" containerID="d368c91eaa9abf1a1b390b0c41dad6f21e25e60b9d769677af2ea186d33edd59"
Feb 27 17:00:20 crc kubenswrapper[4708]: E0227 17:00:20.064816 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d368c91eaa9abf1a1b390b0c41dad6f21e25e60b9d769677af2ea186d33edd59\": container with ID starting with d368c91eaa9abf1a1b390b0c41dad6f21e25e60b9d769677af2ea186d33edd59 not found: ID does not exist" containerID="d368c91eaa9abf1a1b390b0c41dad6f21e25e60b9d769677af2ea186d33edd59"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.064872 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d368c91eaa9abf1a1b390b0c41dad6f21e25e60b9d769677af2ea186d33edd59"} err="failed to get container status \"d368c91eaa9abf1a1b390b0c41dad6f21e25e60b9d769677af2ea186d33edd59\": rpc error: code = NotFound desc = could not find container \"d368c91eaa9abf1a1b390b0c41dad6f21e25e60b9d769677af2ea186d33edd59\": container with ID starting with d368c91eaa9abf1a1b390b0c41dad6f21e25e60b9d769677af2ea186d33edd59 not found: ID does not exist"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.064898 4708 scope.go:117] "RemoveContainer" containerID="4c01e4bf4735bdef78ba03d8ce36795a067c155981a31a1ffd050b3aa6287fb7"
Feb 27 17:00:20 crc kubenswrapper[4708]: E0227 17:00:20.065660 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c01e4bf4735bdef78ba03d8ce36795a067c155981a31a1ffd050b3aa6287fb7\": container with ID starting with 4c01e4bf4735bdef78ba03d8ce36795a067c155981a31a1ffd050b3aa6287fb7 not found: ID does not exist" containerID="4c01e4bf4735bdef78ba03d8ce36795a067c155981a31a1ffd050b3aa6287fb7"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.065676 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c01e4bf4735bdef78ba03d8ce36795a067c155981a31a1ffd050b3aa6287fb7"} err="failed to get container status \"4c01e4bf4735bdef78ba03d8ce36795a067c155981a31a1ffd050b3aa6287fb7\": rpc error: code = NotFound desc = could not find container \"4c01e4bf4735bdef78ba03d8ce36795a067c155981a31a1ffd050b3aa6287fb7\": container with ID starting with 4c01e4bf4735bdef78ba03d8ce36795a067c155981a31a1ffd050b3aa6287fb7 not found: ID does not exist"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.065689 4708 scope.go:117] "RemoveContainer" containerID="9e3b65065d5da29a790a656e44a96c146bf1e9fffd7e81c2843c3ffe4817efb6"
Feb 27 17:00:20 crc kubenswrapper[4708]: E0227 17:00:20.065951 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e3b65065d5da29a790a656e44a96c146bf1e9fffd7e81c2843c3ffe4817efb6\": container with ID starting with 9e3b65065d5da29a790a656e44a96c146bf1e9fffd7e81c2843c3ffe4817efb6 not found: ID does not exist" containerID="9e3b65065d5da29a790a656e44a96c146bf1e9fffd7e81c2843c3ffe4817efb6"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.065968 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e3b65065d5da29a790a656e44a96c146bf1e9fffd7e81c2843c3ffe4817efb6"} err="failed to get container status \"9e3b65065d5da29a790a656e44a96c146bf1e9fffd7e81c2843c3ffe4817efb6\": rpc error: code = NotFound desc = could not find container \"9e3b65065d5da29a790a656e44a96c146bf1e9fffd7e81c2843c3ffe4817efb6\": container with ID starting with 9e3b65065d5da29a790a656e44a96c146bf1e9fffd7e81c2843c3ffe4817efb6 not found: ID does not exist"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.065979 4708 scope.go:117] "RemoveContainer" containerID="c49c12946aed2dca3071583da34f12077f83b18416e5cdf8c31614c378bf2ff3"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.081992 4708 scope.go:117] "RemoveContainer" containerID="350a3a05f06d231e3c6fca76f2892edf006c0cc2c07baf965c6316bddd254159"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.094585 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9b733486-f273-4bd5-afa3-d35d3d1feafc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9b733486-f273-4bd5-afa3-d35d3d1feafc" (UID: "9b733486-f273-4bd5-afa3-d35d3d1feafc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.095385 4708 scope.go:117] "RemoveContainer" containerID="c49c12946aed2dca3071583da34f12077f83b18416e5cdf8c31614c378bf2ff3"
Feb 27 17:00:20 crc kubenswrapper[4708]: E0227 17:00:20.095745 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c49c12946aed2dca3071583da34f12077f83b18416e5cdf8c31614c378bf2ff3\": container with ID starting with c49c12946aed2dca3071583da34f12077f83b18416e5cdf8c31614c378bf2ff3 not found: ID does not exist" containerID="c49c12946aed2dca3071583da34f12077f83b18416e5cdf8c31614c378bf2ff3"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.095783 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c49c12946aed2dca3071583da34f12077f83b18416e5cdf8c31614c378bf2ff3"} err="failed to get container status \"c49c12946aed2dca3071583da34f12077f83b18416e5cdf8c31614c378bf2ff3\": rpc error: code = NotFound desc = could not find container \"c49c12946aed2dca3071583da34f12077f83b18416e5cdf8c31614c378bf2ff3\": container with ID starting with c49c12946aed2dca3071583da34f12077f83b18416e5cdf8c31614c378bf2ff3 not found: ID does not exist"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.095816 4708 scope.go:117] "RemoveContainer" containerID="350a3a05f06d231e3c6fca76f2892edf006c0cc2c07baf965c6316bddd254159"
Feb 27 17:00:20 crc kubenswrapper[4708]: E0227 17:00:20.096466 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"350a3a05f06d231e3c6fca76f2892edf006c0cc2c07baf965c6316bddd254159\": container with ID starting with 350a3a05f06d231e3c6fca76f2892edf006c0cc2c07baf965c6316bddd254159 not found: ID does not exist" containerID="350a3a05f06d231e3c6fca76f2892edf006c0cc2c07baf965c6316bddd254159"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.096501 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"350a3a05f06d231e3c6fca76f2892edf006c0cc2c07baf965c6316bddd254159"} err="failed to get container status \"350a3a05f06d231e3c6fca76f2892edf006c0cc2c07baf965c6316bddd254159\": rpc error: code = NotFound desc = could not find container \"350a3a05f06d231e3c6fca76f2892edf006c0cc2c07baf965c6316bddd254159\": container with ID starting with 350a3a05f06d231e3c6fca76f2892edf006c0cc2c07baf965c6316bddd254159 not found: ID does not exist"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.096522 4708 scope.go:117] "RemoveContainer" containerID="e29a98787c8b691639acd748be7e28a9e42a68c0b3e52d012a53316227dc7ef4"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.117564 4708 scope.go:117] "RemoveContainer" containerID="70cc36222e98f051d01889440ab849aeb28ffd2fb79aa2ad44504d0ce3d33dec"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.128425 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5710135c-fd59-4ff6-b74a-ad7ab8730aff-utilities\") pod \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\" (UID: \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\") "
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.128588 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96160365-88cf-419c-a2d2-04818cde5016-catalog-content\") pod \"96160365-88cf-419c-a2d2-04818cde5016\" (UID: \"96160365-88cf-419c-a2d2-04818cde5016\") "
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.128648 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5c38d70c-968f-44dd-b42b-013bc033debb-utilities\") pod \"5c38d70c-968f-44dd-b42b-013bc033debb\" (UID: \"5c38d70c-968f-44dd-b42b-013bc033debb\") "
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.128681 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tq7h6\" (UniqueName: \"kubernetes.io/projected/5c38d70c-968f-44dd-b42b-013bc033debb-kube-api-access-tq7h6\") pod \"5c38d70c-968f-44dd-b42b-013bc033debb\" (UID: \"5c38d70c-968f-44dd-b42b-013bc033debb\") "
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.128721 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96160365-88cf-419c-a2d2-04818cde5016-utilities\") pod \"96160365-88cf-419c-a2d2-04818cde5016\" (UID: \"96160365-88cf-419c-a2d2-04818cde5016\") "
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.128757 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4czbr\" (UniqueName: \"kubernetes.io/projected/96160365-88cf-419c-a2d2-04818cde5016-kube-api-access-4czbr\") pod \"96160365-88cf-419c-a2d2-04818cde5016\" (UID: \"96160365-88cf-419c-a2d2-04818cde5016\") "
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.128840 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4g9c\" (UniqueName: \"kubernetes.io/projected/5710135c-fd59-4ff6-b74a-ad7ab8730aff-kube-api-access-q4g9c\") pod \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\" (UID: \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\") "
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.128880 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5710135c-fd59-4ff6-b74a-ad7ab8730aff-catalog-content\") pod \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\" (UID: \"5710135c-fd59-4ff6-b74a-ad7ab8730aff\") "
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.128924 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5c38d70c-968f-44dd-b42b-013bc033debb-catalog-content\") pod \"5c38d70c-968f-44dd-b42b-013bc033debb\" (UID: \"5c38d70c-968f-44dd-b42b-013bc033debb\") "
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.129305 4708 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.129327 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9b733486-f273-4bd5-afa3-d35d3d1feafc-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.129340 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-586qv\" (UniqueName: \"kubernetes.io/projected/9b733486-f273-4bd5-afa3-d35d3d1feafc-kube-api-access-586qv\") on node \"crc\" DevicePath \"\""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.129352 4708 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/84260b20-4df9-4dea-9524-bd9c18ef7074-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.129365 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9b733486-f273-4bd5-afa3-d35d3d1feafc-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.129377 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlrjp\" (UniqueName: \"kubernetes.io/projected/84260b20-4df9-4dea-9524-bd9c18ef7074-kube-api-access-zlrjp\") on node \"crc\" DevicePath \"\""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.129703 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5710135c-fd59-4ff6-b74a-ad7ab8730aff-utilities" (OuterVolumeSpecName: "utilities") pod "5710135c-fd59-4ff6-b74a-ad7ab8730aff" (UID: "5710135c-fd59-4ff6-b74a-ad7ab8730aff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.130498 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c38d70c-968f-44dd-b42b-013bc033debb-utilities" (OuterVolumeSpecName: "utilities") pod "5c38d70c-968f-44dd-b42b-013bc033debb" (UID: "5c38d70c-968f-44dd-b42b-013bc033debb"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.131929 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96160365-88cf-419c-a2d2-04818cde5016-utilities" (OuterVolumeSpecName: "utilities") pod "96160365-88cf-419c-a2d2-04818cde5016" (UID: "96160365-88cf-419c-a2d2-04818cde5016"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.132267 4708 scope.go:117] "RemoveContainer" containerID="dbe418d24ea81b93ae21ca19618c2d0bb6fc7b041b6e3c392b1b789b8b7d6b12"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.132958 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c38d70c-968f-44dd-b42b-013bc033debb-kube-api-access-tq7h6" (OuterVolumeSpecName: "kube-api-access-tq7h6") pod "5c38d70c-968f-44dd-b42b-013bc033debb" (UID: "5c38d70c-968f-44dd-b42b-013bc033debb"). InnerVolumeSpecName "kube-api-access-tq7h6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.136334 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5710135c-fd59-4ff6-b74a-ad7ab8730aff-kube-api-access-q4g9c" (OuterVolumeSpecName: "kube-api-access-q4g9c") pod "5710135c-fd59-4ff6-b74a-ad7ab8730aff" (UID: "5710135c-fd59-4ff6-b74a-ad7ab8730aff"). InnerVolumeSpecName "kube-api-access-q4g9c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.136664 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96160365-88cf-419c-a2d2-04818cde5016-kube-api-access-4czbr" (OuterVolumeSpecName: "kube-api-access-4czbr") pod "96160365-88cf-419c-a2d2-04818cde5016" (UID: "96160365-88cf-419c-a2d2-04818cde5016"). InnerVolumeSpecName "kube-api-access-4czbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.150058 4708 scope.go:117] "RemoveContainer" containerID="e29a98787c8b691639acd748be7e28a9e42a68c0b3e52d012a53316227dc7ef4"
Feb 27 17:00:20 crc kubenswrapper[4708]: E0227 17:00:20.151329 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e29a98787c8b691639acd748be7e28a9e42a68c0b3e52d012a53316227dc7ef4\": container with ID starting with e29a98787c8b691639acd748be7e28a9e42a68c0b3e52d012a53316227dc7ef4 not found: ID does not exist" containerID="e29a98787c8b691639acd748be7e28a9e42a68c0b3e52d012a53316227dc7ef4"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.151368 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e29a98787c8b691639acd748be7e28a9e42a68c0b3e52d012a53316227dc7ef4"} err="failed to get container status \"e29a98787c8b691639acd748be7e28a9e42a68c0b3e52d012a53316227dc7ef4\": rpc error: code = NotFound desc = could not find container \"e29a98787c8b691639acd748be7e28a9e42a68c0b3e52d012a53316227dc7ef4\": container with ID starting with e29a98787c8b691639acd748be7e28a9e42a68c0b3e52d012a53316227dc7ef4 not found: ID does not exist"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.151393 4708 scope.go:117] "RemoveContainer" containerID="70cc36222e98f051d01889440ab849aeb28ffd2fb79aa2ad44504d0ce3d33dec"
Feb 27 17:00:20 crc kubenswrapper[4708]: E0227 17:00:20.151787 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70cc36222e98f051d01889440ab849aeb28ffd2fb79aa2ad44504d0ce3d33dec\": container with ID starting with 70cc36222e98f051d01889440ab849aeb28ffd2fb79aa2ad44504d0ce3d33dec not found: ID does not exist" containerID="70cc36222e98f051d01889440ab849aeb28ffd2fb79aa2ad44504d0ce3d33dec"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.151822 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70cc36222e98f051d01889440ab849aeb28ffd2fb79aa2ad44504d0ce3d33dec"} err="failed to get container status \"70cc36222e98f051d01889440ab849aeb28ffd2fb79aa2ad44504d0ce3d33dec\": rpc error: code = NotFound desc = could not find container \"70cc36222e98f051d01889440ab849aeb28ffd2fb79aa2ad44504d0ce3d33dec\": container with ID starting with 70cc36222e98f051d01889440ab849aeb28ffd2fb79aa2ad44504d0ce3d33dec not found: ID does not exist"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.151857 4708 scope.go:117] "RemoveContainer" containerID="dbe418d24ea81b93ae21ca19618c2d0bb6fc7b041b6e3c392b1b789b8b7d6b12"
Feb 27 17:00:20 crc kubenswrapper[4708]: E0227 17:00:20.152092 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dbe418d24ea81b93ae21ca19618c2d0bb6fc7b041b6e3c392b1b789b8b7d6b12\": container with ID starting with dbe418d24ea81b93ae21ca19618c2d0bb6fc7b041b6e3c392b1b789b8b7d6b12 not found: ID does not exist" containerID="dbe418d24ea81b93ae21ca19618c2d0bb6fc7b041b6e3c392b1b789b8b7d6b12"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.152115 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dbe418d24ea81b93ae21ca19618c2d0bb6fc7b041b6e3c392b1b789b8b7d6b12"} err="failed to get container status \"dbe418d24ea81b93ae21ca19618c2d0bb6fc7b041b6e3c392b1b789b8b7d6b12\": rpc error: code = NotFound desc = could not find container \"dbe418d24ea81b93ae21ca19618c2d0bb6fc7b041b6e3c392b1b789b8b7d6b12\": container with ID starting with dbe418d24ea81b93ae21ca19618c2d0bb6fc7b041b6e3c392b1b789b8b7d6b12 not found: ID does not exist"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.152142 4708 scope.go:117] "RemoveContainer" containerID="c846e86955b2a75c54223b100b1e363379051dd705c7c97ec5ad027d05a26526"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.154958 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c38d70c-968f-44dd-b42b-013bc033debb-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5c38d70c-968f-44dd-b42b-013bc033debb" (UID: "5c38d70c-968f-44dd-b42b-013bc033debb"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.165504 4708 scope.go:117] "RemoveContainer" containerID="dd6c8aca832963915c1709ac87b9c1661834af02abe6f8e65645584b9b6cc858"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.178809 4708 scope.go:117] "RemoveContainer" containerID="ca00f4aeed7628f76ae0a610fe0bc66fbb5ec699b4d37356003dc41335e77ff2"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.185381 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5710135c-fd59-4ff6-b74a-ad7ab8730aff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5710135c-fd59-4ff6-b74a-ad7ab8730aff" (UID: "5710135c-fd59-4ff6-b74a-ad7ab8730aff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.193477 4708 scope.go:117] "RemoveContainer" containerID="c846e86955b2a75c54223b100b1e363379051dd705c7c97ec5ad027d05a26526"
Feb 27 17:00:20 crc kubenswrapper[4708]: E0227 17:00:20.193793 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c846e86955b2a75c54223b100b1e363379051dd705c7c97ec5ad027d05a26526\": container with ID starting with c846e86955b2a75c54223b100b1e363379051dd705c7c97ec5ad027d05a26526 not found: ID does not exist" containerID="c846e86955b2a75c54223b100b1e363379051dd705c7c97ec5ad027d05a26526"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.193834 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c846e86955b2a75c54223b100b1e363379051dd705c7c97ec5ad027d05a26526"} err="failed to get container status \"c846e86955b2a75c54223b100b1e363379051dd705c7c97ec5ad027d05a26526\": rpc error: code = NotFound desc = could not find container \"c846e86955b2a75c54223b100b1e363379051dd705c7c97ec5ad027d05a26526\": container with ID starting with c846e86955b2a75c54223b100b1e363379051dd705c7c97ec5ad027d05a26526 not found: ID does not exist"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.193876 4708 scope.go:117] "RemoveContainer" containerID="dd6c8aca832963915c1709ac87b9c1661834af02abe6f8e65645584b9b6cc858"
Feb 27 17:00:20 crc kubenswrapper[4708]: E0227 17:00:20.194169 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd6c8aca832963915c1709ac87b9c1661834af02abe6f8e65645584b9b6cc858\": container with ID starting with dd6c8aca832963915c1709ac87b9c1661834af02abe6f8e65645584b9b6cc858 not found: ID does not exist" containerID="dd6c8aca832963915c1709ac87b9c1661834af02abe6f8e65645584b9b6cc858"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.194188 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd6c8aca832963915c1709ac87b9c1661834af02abe6f8e65645584b9b6cc858"} err="failed to get container status \"dd6c8aca832963915c1709ac87b9c1661834af02abe6f8e65645584b9b6cc858\": rpc error: code = NotFound desc = could not find container \"dd6c8aca832963915c1709ac87b9c1661834af02abe6f8e65645584b9b6cc858\": container with ID starting with dd6c8aca832963915c1709ac87b9c1661834af02abe6f8e65645584b9b6cc858 not found: ID does not exist"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.194199 4708 scope.go:117] "RemoveContainer" containerID="ca00f4aeed7628f76ae0a610fe0bc66fbb5ec699b4d37356003dc41335e77ff2"
Feb 27 17:00:20 crc kubenswrapper[4708]: E0227 17:00:20.194436 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca00f4aeed7628f76ae0a610fe0bc66fbb5ec699b4d37356003dc41335e77ff2\": container with ID starting with ca00f4aeed7628f76ae0a610fe0bc66fbb5ec699b4d37356003dc41335e77ff2 not found: ID does not exist" containerID="ca00f4aeed7628f76ae0a610fe0bc66fbb5ec699b4d37356003dc41335e77ff2"
Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.194472 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca00f4aeed7628f76ae0a610fe0bc66fbb5ec699b4d37356003dc41335e77ff2"} err="failed to get container status \"ca00f4aeed7628f76ae0a610fe0bc66fbb5ec699b4d37356003dc41335e77ff2\": rpc error: code = NotFound desc = could not find container \"ca00f4aeed7628f76ae0a610fe0bc66fbb5ec699b4d37356003dc41335e77ff2\": container with ID starting with ca00f4aeed7628f76ae0a610fe0bc66fbb5ec699b4d37356003dc41335e77ff2 not found: ID does not exist"
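The scope.go "RemoveContainer" entries paired with log.go NotFound errors above are a benign teardown race: the kubelet retries container removal for the deleted marketplace pods, asks CRI-O for each container's status, and the runtime has already removed it, so ContainerStatus returns NotFound and pod_container_deletor records the failure. One way to confirm every NotFound ID was already the subject of an earlier "RemoveContainer" line is to cross-reference the IDs. A minimal sketch in Python, assuming this journal excerpt has been saved to kubelet.log (a hypothetical path, not something the kubelet writes itself):

    import re
    from collections import Counter

    # Matches the structured "DeleteContainer returned error" lines above.
    NOT_FOUND = re.compile(
        r'pod_container_deletor\.go:\d+\] "DeleteContainer returned error" '
        r'containerID=\{"Type":"cri-o","ID":"(?P<cid>[0-9a-f]{64})"\}'
    )
    # Matches the scope.go "RemoveContainer" lines that precede them.
    REMOVED = re.compile(r'scope\.go:\d+\] "RemoveContainer" containerID="(?P<cid>[0-9a-f]{64})"')

    removed, not_found = set(), Counter()
    with open("kubelet.log") as log:
        for line in log:
            if m := REMOVED.search(line):
                removed.add(m.group("cid"))
            elif m := NOT_FOUND.search(line):
                not_found[m.group("cid")] += 1

    for cid, n in not_found.items():
        # During a clean teardown, every NotFound ID should already be in `removed`.
        print(f"{cid[:12]}  NotFound x{n}  seen-in-RemoveContainer={cid in removed}")

In this excerpt all eleven NotFound IDs appear in a preceding "RemoveContainer" entry, which is why these error-level lines are harmless here.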
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.289357 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zvqlm"] Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.297548 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zvqlm"] Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.303190 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lzlm4"] Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.308994 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-lzlm4"] Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.326544 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-wvdjp"] Feb 27 17:00:20 crc kubenswrapper[4708]: W0227 17:00:20.329070 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0004cd70_bc98_40ac_b46e_54e84ba076d5.slice/crio-289934bdbd54142d35230e13e770a2571693cccccd29636ec15354006d28072c WatchSource:0}: Error finding container 289934bdbd54142d35230e13e770a2571693cccccd29636ec15354006d28072c: Status 404 returned error can't find the container with id 289934bdbd54142d35230e13e770a2571693cccccd29636ec15354006d28072c Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.331318 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96160365-88cf-419c-a2d2-04818cde5016-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.342258 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7rtdw"] Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.345723 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7rtdw"] Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.360954 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5lwl"] Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.364368 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5lwl"] Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.835829 4708 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-lzlm4 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 17:00:20 crc kubenswrapper[4708]: I0227 17:00:20.835898 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-lzlm4" podUID="84260b20-4df9-4dea-9524-bd9c18ef7074" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.33:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 17:00:21 crc kubenswrapper[4708]: I0227 17:00:21.048251 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" 
event={"ID":"0004cd70-bc98-40ac-b46e-54e84ba076d5","Type":"ContainerStarted","Data":"46b4453bfb47c96c62e3d37ce75e4f4523a51c49d90a68816788c0e4e191eccd"} Feb 27 17:00:21 crc kubenswrapper[4708]: I0227 17:00:21.048308 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" event={"ID":"0004cd70-bc98-40ac-b46e-54e84ba076d5","Type":"ContainerStarted","Data":"289934bdbd54142d35230e13e770a2571693cccccd29636ec15354006d28072c"} Feb 27 17:00:21 crc kubenswrapper[4708]: I0227 17:00:21.048983 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" Feb 27 17:00:21 crc kubenswrapper[4708]: I0227 17:00:21.052640 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" Feb 27 17:00:21 crc kubenswrapper[4708]: I0227 17:00:21.053504 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lmzsx" event={"ID":"96160365-88cf-419c-a2d2-04818cde5016","Type":"ContainerDied","Data":"ea224760ab242c4b2f7e13a45af44649e30ebe5272193b5ccb839038aeaf37e0"} Feb 27 17:00:21 crc kubenswrapper[4708]: I0227 17:00:21.053549 4708 scope.go:117] "RemoveContainer" containerID="a1c2669b0f45732a8d1f0bafb53b7294fa0c3e0072e535cc2721904b5fc7b17e" Feb 27 17:00:21 crc kubenswrapper[4708]: I0227 17:00:21.052733 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lmzsx" Feb 27 17:00:21 crc kubenswrapper[4708]: I0227 17:00:21.082239 4708 scope.go:117] "RemoveContainer" containerID="4d1a7f7d50dc287f86aef8a570e7dd7ec147b73a9b99c9ce8a69153aec0236cc" Feb 27 17:00:21 crc kubenswrapper[4708]: I0227 17:00:21.093601 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-wvdjp" podStartSLOduration=2.093581336 podStartE2EDuration="2.093581336s" podCreationTimestamp="2026-02-27 17:00:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:00:21.074449289 +0000 UTC m=+419.590246896" watchObservedRunningTime="2026-02-27 17:00:21.093581336 +0000 UTC m=+419.609378923" Feb 27 17:00:21 crc kubenswrapper[4708]: I0227 17:00:21.107448 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lmzsx"] Feb 27 17:00:21 crc kubenswrapper[4708]: I0227 17:00:21.111717 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lmzsx"] Feb 27 17:00:21 crc kubenswrapper[4708]: I0227 17:00:21.121921 4708 scope.go:117] "RemoveContainer" containerID="ee27923e89f621ba2573099938ab38bd367b986ea4e351460267ac6b5a73757c" Feb 27 17:00:22 crc kubenswrapper[4708]: I0227 17:00:22.252145 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" path="/var/lib/kubelet/pods/5710135c-fd59-4ff6-b74a-ad7ab8730aff/volumes" Feb 27 17:00:22 crc kubenswrapper[4708]: I0227 17:00:22.253623 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c38d70c-968f-44dd-b42b-013bc033debb" path="/var/lib/kubelet/pods/5c38d70c-968f-44dd-b42b-013bc033debb/volumes" Feb 27 17:00:22 crc kubenswrapper[4708]: I0227 17:00:22.254842 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84260b20-4df9-4dea-9524-bd9c18ef7074" 
path="/var/lib/kubelet/pods/84260b20-4df9-4dea-9524-bd9c18ef7074/volumes" Feb 27 17:00:22 crc kubenswrapper[4708]: I0227 17:00:22.256641 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96160365-88cf-419c-a2d2-04818cde5016" path="/var/lib/kubelet/pods/96160365-88cf-419c-a2d2-04818cde5016/volumes" Feb 27 17:00:22 crc kubenswrapper[4708]: I0227 17:00:22.257810 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b733486-f273-4bd5-afa3-d35d3d1feafc" path="/var/lib/kubelet/pods/9b733486-f273-4bd5-afa3-d35d3d1feafc/volumes" Feb 27 17:00:35 crc kubenswrapper[4708]: I0227 17:00:35.632289 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:00:35 crc kubenswrapper[4708]: I0227 17:00:35.633009 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.414234 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4vlld"] Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415079 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84260b20-4df9-4dea-9524-bd9c18ef7074" containerName="marketplace-operator" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415095 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="84260b20-4df9-4dea-9524-bd9c18ef7074" containerName="marketplace-operator" Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415104 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerName="registry-server" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415110 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerName="registry-server" Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415120 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96160365-88cf-419c-a2d2-04818cde5016" containerName="extract-content" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415126 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="96160365-88cf-419c-a2d2-04818cde5016" containerName="extract-content" Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415137 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerName="extract-content" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415143 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerName="extract-content" Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415150 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b733486-f273-4bd5-afa3-d35d3d1feafc" containerName="extract-content" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415156 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b733486-f273-4bd5-afa3-d35d3d1feafc" containerName="extract-content" Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415170 4708 
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.414234 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4vlld"]
Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415079 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84260b20-4df9-4dea-9524-bd9c18ef7074" containerName="marketplace-operator"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415095 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="84260b20-4df9-4dea-9524-bd9c18ef7074" containerName="marketplace-operator"
Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415104 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerName="registry-server"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415110 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerName="registry-server"
Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415120 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96160365-88cf-419c-a2d2-04818cde5016" containerName="extract-content"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415126 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="96160365-88cf-419c-a2d2-04818cde5016" containerName="extract-content"
Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415137 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerName="extract-content"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415143 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerName="extract-content"
Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415150 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b733486-f273-4bd5-afa3-d35d3d1feafc" containerName="extract-content"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415156 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b733486-f273-4bd5-afa3-d35d3d1feafc" containerName="extract-content"
Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415170 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c38d70c-968f-44dd-b42b-013bc033debb" containerName="registry-server"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415176 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c38d70c-968f-44dd-b42b-013bc033debb" containerName="registry-server"
Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415184 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84260b20-4df9-4dea-9524-bd9c18ef7074" containerName="marketplace-operator"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415190 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="84260b20-4df9-4dea-9524-bd9c18ef7074" containerName="marketplace-operator"
Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415200 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b733486-f273-4bd5-afa3-d35d3d1feafc" containerName="registry-server"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415206 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b733486-f273-4bd5-afa3-d35d3d1feafc" containerName="registry-server"
Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415215 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9b733486-f273-4bd5-afa3-d35d3d1feafc" containerName="extract-utilities"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415222 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9b733486-f273-4bd5-afa3-d35d3d1feafc" containerName="extract-utilities"
Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415231 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerName="extract-utilities"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415237 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerName="extract-utilities"
Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415243 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96160365-88cf-419c-a2d2-04818cde5016" containerName="extract-utilities"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415250 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="96160365-88cf-419c-a2d2-04818cde5016" containerName="extract-utilities"
Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415256 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c38d70c-968f-44dd-b42b-013bc033debb" containerName="extract-content"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415263 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c38d70c-968f-44dd-b42b-013bc033debb" containerName="extract-content"
Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415273 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c38d70c-968f-44dd-b42b-013bc033debb" containerName="extract-utilities"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415279 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c38d70c-968f-44dd-b42b-013bc033debb" containerName="extract-utilities"
Feb 27 17:00:44 crc kubenswrapper[4708]: E0227 17:00:44.415287 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96160365-88cf-419c-a2d2-04818cde5016" containerName="registry-server"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415293 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="96160365-88cf-419c-a2d2-04818cde5016" containerName="registry-server"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415391 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="84260b20-4df9-4dea-9524-bd9c18ef7074" containerName="marketplace-operator"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415399 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="84260b20-4df9-4dea-9524-bd9c18ef7074" containerName="marketplace-operator"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415408 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b733486-f273-4bd5-afa3-d35d3d1feafc" containerName="registry-server"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415417 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c38d70c-968f-44dd-b42b-013bc033debb" containerName="registry-server"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415426 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="5710135c-fd59-4ff6-b74a-ad7ab8730aff" containerName="registry-server"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.415433 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="96160365-88cf-419c-a2d2-04818cde5016" containerName="registry-server"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.416126 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4vlld"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.422932 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.445767 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4vlld"]
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.474072 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46frp\" (UniqueName: \"kubernetes.io/projected/1159c76c-e814-4a91-a99f-2b18b6758214-kube-api-access-46frp\") pod \"redhat-operators-4vlld\" (UID: \"1159c76c-e814-4a91-a99f-2b18b6758214\") " pod="openshift-marketplace/redhat-operators-4vlld"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.474232 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1159c76c-e814-4a91-a99f-2b18b6758214-utilities\") pod \"redhat-operators-4vlld\" (UID: \"1159c76c-e814-4a91-a99f-2b18b6758214\") " pod="openshift-marketplace/redhat-operators-4vlld"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.474289 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1159c76c-e814-4a91-a99f-2b18b6758214-catalog-content\") pod \"redhat-operators-4vlld\" (UID: \"1159c76c-e814-4a91-a99f-2b18b6758214\") " pod="openshift-marketplace/redhat-operators-4vlld"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.575366 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1159c76c-e814-4a91-a99f-2b18b6758214-utilities\") pod \"redhat-operators-4vlld\" (UID: \"1159c76c-e814-4a91-a99f-2b18b6758214\") " pod="openshift-marketplace/redhat-operators-4vlld"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.575477 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1159c76c-e814-4a91-a99f-2b18b6758214-catalog-content\") pod \"redhat-operators-4vlld\" (UID: \"1159c76c-e814-4a91-a99f-2b18b6758214\") " pod="openshift-marketplace/redhat-operators-4vlld"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.575595 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46frp\" (UniqueName: \"kubernetes.io/projected/1159c76c-e814-4a91-a99f-2b18b6758214-kube-api-access-46frp\") pod \"redhat-operators-4vlld\" (UID: \"1159c76c-e814-4a91-a99f-2b18b6758214\") " pod="openshift-marketplace/redhat-operators-4vlld"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.576220 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1159c76c-e814-4a91-a99f-2b18b6758214-utilities\") pod \"redhat-operators-4vlld\" (UID: \"1159c76c-e814-4a91-a99f-2b18b6758214\") " pod="openshift-marketplace/redhat-operators-4vlld"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.576315 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1159c76c-e814-4a91-a99f-2b18b6758214-catalog-content\") pod \"redhat-operators-4vlld\" (UID: \"1159c76c-e814-4a91-a99f-2b18b6758214\") " pod="openshift-marketplace/redhat-operators-4vlld"
Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.608785 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46frp\" (UniqueName: \"kubernetes.io/projected/1159c76c-e814-4a91-a99f-2b18b6758214-kube-api-access-46frp\") pod \"redhat-operators-4vlld\" (UID: \"1159c76c-e814-4a91-a99f-2b18b6758214\") " pod="openshift-marketplace/redhat-operators-4vlld"
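Volume setup for the new redhat-operators-4vlld pod follows the reconciler's fixed sequence visible above, once per volume (utilities, catalog-content, kube-api-access-46frp): operationExecutor.VerifyControllerAttachedVolume, then MountVolume started, then MountVolume.SetUp succeeded. A minimal sketch that reconstructs this per-volume timeline from the journal text, again assuming it is saved as kubelet.log (hypothetical path; the \" in the pattern mirrors the escaped quoting in these entries):

    import re

    EVENT = re.compile(
        r'(?P<ts>\d{2}:\d{2}:\d{2}\.\d{6}).*?'
        r'(?P<op>VerifyControllerAttachedVolume started|MountVolume started|MountVolume\.SetUp succeeded)'
        r'.*?UniqueName: \\"(?P<vol>kubernetes\.io/[^\\]+)\\"'
    )

    POD = "openshift-marketplace/redhat-operators-4vlld"
    with open("kubelet.log") as log:
        for line in log:
            if POD in line and (m := EVENT.search(line)):
                # One row per reconciler step, in journal order.
                print(m.group("ts"), m.group("op").ljust(38), m.group("vol"))

The same three-step pattern repeats below for certified-operators-x8lns, community-operators-22xt4, and redhat-marketplace-jqppg; only the POD string needs to change.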
\"kubernetes.io/empty-dir/1159c76c-e814-4a91-a99f-2b18b6758214-catalog-content\") pod \"redhat-operators-4vlld\" (UID: \"1159c76c-e814-4a91-a99f-2b18b6758214\") " pod="openshift-marketplace/redhat-operators-4vlld" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.575595 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46frp\" (UniqueName: \"kubernetes.io/projected/1159c76c-e814-4a91-a99f-2b18b6758214-kube-api-access-46frp\") pod \"redhat-operators-4vlld\" (UID: \"1159c76c-e814-4a91-a99f-2b18b6758214\") " pod="openshift-marketplace/redhat-operators-4vlld" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.576220 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1159c76c-e814-4a91-a99f-2b18b6758214-utilities\") pod \"redhat-operators-4vlld\" (UID: \"1159c76c-e814-4a91-a99f-2b18b6758214\") " pod="openshift-marketplace/redhat-operators-4vlld" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.576315 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1159c76c-e814-4a91-a99f-2b18b6758214-catalog-content\") pod \"redhat-operators-4vlld\" (UID: \"1159c76c-e814-4a91-a99f-2b18b6758214\") " pod="openshift-marketplace/redhat-operators-4vlld" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.608785 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46frp\" (UniqueName: \"kubernetes.io/projected/1159c76c-e814-4a91-a99f-2b18b6758214-kube-api-access-46frp\") pod \"redhat-operators-4vlld\" (UID: \"1159c76c-e814-4a91-a99f-2b18b6758214\") " pod="openshift-marketplace/redhat-operators-4vlld" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.612671 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-x8lns"] Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.614825 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.618558 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.642755 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x8lns"] Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.747661 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-4vlld" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.778395 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sz4v2\" (UniqueName: \"kubernetes.io/projected/f25135e1-5701-4932-a01a-4e5f550181e6-kube-api-access-sz4v2\") pod \"certified-operators-x8lns\" (UID: \"f25135e1-5701-4932-a01a-4e5f550181e6\") " pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.779074 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f25135e1-5701-4932-a01a-4e5f550181e6-catalog-content\") pod \"certified-operators-x8lns\" (UID: \"f25135e1-5701-4932-a01a-4e5f550181e6\") " pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.779176 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f25135e1-5701-4932-a01a-4e5f550181e6-utilities\") pod \"certified-operators-x8lns\" (UID: \"f25135e1-5701-4932-a01a-4e5f550181e6\") " pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.880320 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sz4v2\" (UniqueName: \"kubernetes.io/projected/f25135e1-5701-4932-a01a-4e5f550181e6-kube-api-access-sz4v2\") pod \"certified-operators-x8lns\" (UID: \"f25135e1-5701-4932-a01a-4e5f550181e6\") " pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.880473 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f25135e1-5701-4932-a01a-4e5f550181e6-catalog-content\") pod \"certified-operators-x8lns\" (UID: \"f25135e1-5701-4932-a01a-4e5f550181e6\") " pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.880549 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f25135e1-5701-4932-a01a-4e5f550181e6-utilities\") pod \"certified-operators-x8lns\" (UID: \"f25135e1-5701-4932-a01a-4e5f550181e6\") " pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.881329 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f25135e1-5701-4932-a01a-4e5f550181e6-utilities\") pod \"certified-operators-x8lns\" (UID: \"f25135e1-5701-4932-a01a-4e5f550181e6\") " pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.881677 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f25135e1-5701-4932-a01a-4e5f550181e6-catalog-content\") pod \"certified-operators-x8lns\" (UID: \"f25135e1-5701-4932-a01a-4e5f550181e6\") " pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.903691 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sz4v2\" (UniqueName: \"kubernetes.io/projected/f25135e1-5701-4932-a01a-4e5f550181e6-kube-api-access-sz4v2\") pod 
\"certified-operators-x8lns\" (UID: \"f25135e1-5701-4932-a01a-4e5f550181e6\") " pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:44 crc kubenswrapper[4708]: I0227 17:00:44.982142 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:45 crc kubenswrapper[4708]: I0227 17:00:45.227131 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-x8lns"] Feb 27 17:00:45 crc kubenswrapper[4708]: W0227 17:00:45.232420 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf25135e1_5701_4932_a01a_4e5f550181e6.slice/crio-6a8cba260dc4ad8f2c3d1d9f5e39d8612b0788f23d94f138a0316de149ef8c47 WatchSource:0}: Error finding container 6a8cba260dc4ad8f2c3d1d9f5e39d8612b0788f23d94f138a0316de149ef8c47: Status 404 returned error can't find the container with id 6a8cba260dc4ad8f2c3d1d9f5e39d8612b0788f23d94f138a0316de149ef8c47 Feb 27 17:00:45 crc kubenswrapper[4708]: I0227 17:00:45.275019 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4vlld"] Feb 27 17:00:46 crc kubenswrapper[4708]: I0227 17:00:46.232227 4708 generic.go:334] "Generic (PLEG): container finished" podID="1159c76c-e814-4a91-a99f-2b18b6758214" containerID="f7c4a72e1759a393a1f64d3534b80ae0ee04405b57c60b960a92ffa3f57db095" exitCode=0 Feb 27 17:00:46 crc kubenswrapper[4708]: I0227 17:00:46.241133 4708 generic.go:334] "Generic (PLEG): container finished" podID="f25135e1-5701-4932-a01a-4e5f550181e6" containerID="26c58fcd272fda410e18794b274a2d964b17785e108fb665a0e3b60f2281c070" exitCode=0 Feb 27 17:00:46 crc kubenswrapper[4708]: I0227 17:00:46.242328 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vlld" event={"ID":"1159c76c-e814-4a91-a99f-2b18b6758214","Type":"ContainerDied","Data":"f7c4a72e1759a393a1f64d3534b80ae0ee04405b57c60b960a92ffa3f57db095"} Feb 27 17:00:46 crc kubenswrapper[4708]: I0227 17:00:46.242372 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vlld" event={"ID":"1159c76c-e814-4a91-a99f-2b18b6758214","Type":"ContainerStarted","Data":"fecd57f6c159686b32840d908c5fe024864832daceba020cf22f9b0ad721ce0a"} Feb 27 17:00:46 crc kubenswrapper[4708]: I0227 17:00:46.242391 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8lns" event={"ID":"f25135e1-5701-4932-a01a-4e5f550181e6","Type":"ContainerDied","Data":"26c58fcd272fda410e18794b274a2d964b17785e108fb665a0e3b60f2281c070"} Feb 27 17:00:46 crc kubenswrapper[4708]: I0227 17:00:46.242412 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8lns" event={"ID":"f25135e1-5701-4932-a01a-4e5f550181e6","Type":"ContainerStarted","Data":"6a8cba260dc4ad8f2c3d1d9f5e39d8612b0788f23d94f138a0316de149ef8c47"} Feb 27 17:00:46 crc kubenswrapper[4708]: I0227 17:00:46.809494 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-22xt4"] Feb 27 17:00:46 crc kubenswrapper[4708]: I0227 17:00:46.810381 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:46 crc kubenswrapper[4708]: I0227 17:00:46.812167 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Feb 27 17:00:46 crc kubenswrapper[4708]: I0227 17:00:46.821350 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22xt4"] Feb 27 17:00:46 crc kubenswrapper[4708]: I0227 17:00:46.905506 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7917d39c-2ac3-45d4-817d-d0722e37c5a5-utilities\") pod \"community-operators-22xt4\" (UID: \"7917d39c-2ac3-45d4-817d-d0722e37c5a5\") " pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:46 crc kubenswrapper[4708]: I0227 17:00:46.905621 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csjw7\" (UniqueName: \"kubernetes.io/projected/7917d39c-2ac3-45d4-817d-d0722e37c5a5-kube-api-access-csjw7\") pod \"community-operators-22xt4\" (UID: \"7917d39c-2ac3-45d4-817d-d0722e37c5a5\") " pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:46 crc kubenswrapper[4708]: I0227 17:00:46.905665 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7917d39c-2ac3-45d4-817d-d0722e37c5a5-catalog-content\") pod \"community-operators-22xt4\" (UID: \"7917d39c-2ac3-45d4-817d-d0722e37c5a5\") " pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.006172 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csjw7\" (UniqueName: \"kubernetes.io/projected/7917d39c-2ac3-45d4-817d-d0722e37c5a5-kube-api-access-csjw7\") pod \"community-operators-22xt4\" (UID: \"7917d39c-2ac3-45d4-817d-d0722e37c5a5\") " pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.006215 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7917d39c-2ac3-45d4-817d-d0722e37c5a5-catalog-content\") pod \"community-operators-22xt4\" (UID: \"7917d39c-2ac3-45d4-817d-d0722e37c5a5\") " pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.006294 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7917d39c-2ac3-45d4-817d-d0722e37c5a5-utilities\") pod \"community-operators-22xt4\" (UID: \"7917d39c-2ac3-45d4-817d-d0722e37c5a5\") " pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.006726 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7917d39c-2ac3-45d4-817d-d0722e37c5a5-catalog-content\") pod \"community-operators-22xt4\" (UID: \"7917d39c-2ac3-45d4-817d-d0722e37c5a5\") " pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.007123 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7917d39c-2ac3-45d4-817d-d0722e37c5a5-utilities\") pod \"community-operators-22xt4\" (UID: 
\"7917d39c-2ac3-45d4-817d-d0722e37c5a5\") " pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.009650 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-jqppg"] Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.011489 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.013517 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.027261 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqppg"] Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.039549 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csjw7\" (UniqueName: \"kubernetes.io/projected/7917d39c-2ac3-45d4-817d-d0722e37c5a5-kube-api-access-csjw7\") pod \"community-operators-22xt4\" (UID: \"7917d39c-2ac3-45d4-817d-d0722e37c5a5\") " pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.108083 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5688f78b-5e14-4ff7-83d1-681f44a1273e-utilities\") pod \"redhat-marketplace-jqppg\" (UID: \"5688f78b-5e14-4ff7-83d1-681f44a1273e\") " pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.108140 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7n95t\" (UniqueName: \"kubernetes.io/projected/5688f78b-5e14-4ff7-83d1-681f44a1273e-kube-api-access-7n95t\") pod \"redhat-marketplace-jqppg\" (UID: \"5688f78b-5e14-4ff7-83d1-681f44a1273e\") " pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.108281 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5688f78b-5e14-4ff7-83d1-681f44a1273e-catalog-content\") pod \"redhat-marketplace-jqppg\" (UID: \"5688f78b-5e14-4ff7-83d1-681f44a1273e\") " pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.136252 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.208726 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5688f78b-5e14-4ff7-83d1-681f44a1273e-catalog-content\") pod \"redhat-marketplace-jqppg\" (UID: \"5688f78b-5e14-4ff7-83d1-681f44a1273e\") " pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.208837 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5688f78b-5e14-4ff7-83d1-681f44a1273e-utilities\") pod \"redhat-marketplace-jqppg\" (UID: \"5688f78b-5e14-4ff7-83d1-681f44a1273e\") " pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.208865 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7n95t\" (UniqueName: \"kubernetes.io/projected/5688f78b-5e14-4ff7-83d1-681f44a1273e-kube-api-access-7n95t\") pod \"redhat-marketplace-jqppg\" (UID: \"5688f78b-5e14-4ff7-83d1-681f44a1273e\") " pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.209533 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5688f78b-5e14-4ff7-83d1-681f44a1273e-catalog-content\") pod \"redhat-marketplace-jqppg\" (UID: \"5688f78b-5e14-4ff7-83d1-681f44a1273e\") " pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.209572 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5688f78b-5e14-4ff7-83d1-681f44a1273e-utilities\") pod \"redhat-marketplace-jqppg\" (UID: \"5688f78b-5e14-4ff7-83d1-681f44a1273e\") " pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.239511 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7n95t\" (UniqueName: \"kubernetes.io/projected/5688f78b-5e14-4ff7-83d1-681f44a1273e-kube-api-access-7n95t\") pod \"redhat-marketplace-jqppg\" (UID: \"5688f78b-5e14-4ff7-83d1-681f44a1273e\") " pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.326371 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.521069 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-jqppg"] Feb 27 17:00:47 crc kubenswrapper[4708]: W0227 17:00:47.529799 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5688f78b_5e14_4ff7_83d1_681f44a1273e.slice/crio-43aeeefb652aeb0db9f9e3cad5c39bc5bc28fc9865495813558b1d39aa8429c4 WatchSource:0}: Error finding container 43aeeefb652aeb0db9f9e3cad5c39bc5bc28fc9865495813558b1d39aa8429c4: Status 404 returned error can't find the container with id 43aeeefb652aeb0db9f9e3cad5c39bc5bc28fc9865495813558b1d39aa8429c4 Feb 27 17:00:47 crc kubenswrapper[4708]: I0227 17:00:47.652937 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-22xt4"] Feb 27 17:00:47 crc kubenswrapper[4708]: W0227 17:00:47.658812 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7917d39c_2ac3_45d4_817d_d0722e37c5a5.slice/crio-d98084521b30b7e8902e93f354ea19c6c75b7666f19ec952c613ab8c9e478e66 WatchSource:0}: Error finding container d98084521b30b7e8902e93f354ea19c6c75b7666f19ec952c613ab8c9e478e66: Status 404 returned error can't find the container with id d98084521b30b7e8902e93f354ea19c6c75b7666f19ec952c613ab8c9e478e66 Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.255039 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vlld" event={"ID":"1159c76c-e814-4a91-a99f-2b18b6758214","Type":"ContainerStarted","Data":"5ecc9bab0ef3cac730c8133f52f5f975068475b5014250b73987bce02d93ad32"} Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.259034 4708 generic.go:334] "Generic (PLEG): container finished" podID="f25135e1-5701-4932-a01a-4e5f550181e6" containerID="eb6463fcf9fc27141a1381f9f74eea999173162c6c571b3bd4f5b25b56d34941" exitCode=0 Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.259100 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8lns" event={"ID":"f25135e1-5701-4932-a01a-4e5f550181e6","Type":"ContainerDied","Data":"eb6463fcf9fc27141a1381f9f74eea999173162c6c571b3bd4f5b25b56d34941"} Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.261181 4708 generic.go:334] "Generic (PLEG): container finished" podID="5688f78b-5e14-4ff7-83d1-681f44a1273e" containerID="85e1c32eb0855cd7571265a4e15f4cbd4600baa53f58e4fed521dde8f5612c65" exitCode=0 Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.261231 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqppg" event={"ID":"5688f78b-5e14-4ff7-83d1-681f44a1273e","Type":"ContainerDied","Data":"85e1c32eb0855cd7571265a4e15f4cbd4600baa53f58e4fed521dde8f5612c65"} Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.261281 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqppg" event={"ID":"5688f78b-5e14-4ff7-83d1-681f44a1273e","Type":"ContainerStarted","Data":"43aeeefb652aeb0db9f9e3cad5c39bc5bc28fc9865495813558b1d39aa8429c4"} Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.263017 4708 generic.go:334] "Generic (PLEG): container finished" podID="7917d39c-2ac3-45d4-817d-d0722e37c5a5" containerID="2d9bef948e5e3b9bd07868b7e71dc2931474d9c1436a92a6945b57a4da277f09" exitCode=0 
Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.263056 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22xt4" event={"ID":"7917d39c-2ac3-45d4-817d-d0722e37c5a5","Type":"ContainerDied","Data":"2d9bef948e5e3b9bd07868b7e71dc2931474d9c1436a92a6945b57a4da277f09"}
Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.263081 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-22xt4" event={"ID":"7917d39c-2ac3-45d4-817d-d0722e37c5a5","Type":"ContainerStarted","Data":"d98084521b30b7e8902e93f354ea19c6c75b7666f19ec952c613ab8c9e478e66"}
Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.804835 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-27vf5"]
Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.805480 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.821431 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-27vf5"]
Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.964736 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/da22d6f3-c741-4e50-87a6-2308b2ec6db2-registry-tls\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.965061 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/da22d6f3-c741-4e50-87a6-2308b2ec6db2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.965094 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/da22d6f3-c741-4e50-87a6-2308b2ec6db2-registry-certificates\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.965114 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sppqx\" (UniqueName: \"kubernetes.io/projected/da22d6f3-c741-4e50-87a6-2308b2ec6db2-kube-api-access-sppqx\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.965130 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da22d6f3-c741-4e50-87a6-2308b2ec6db2-trusted-ca\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.965151 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/da22d6f3-c741-4e50-87a6-2308b2ec6db2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.965175 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:48 crc kubenswrapper[4708]: I0227 17:00:48.965278 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/da22d6f3-c741-4e50-87a6-2308b2ec6db2-bound-sa-token\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.002467 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.067019 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/da22d6f3-c741-4e50-87a6-2308b2ec6db2-registry-tls\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.067073 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/da22d6f3-c741-4e50-87a6-2308b2ec6db2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.067102 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/da22d6f3-c741-4e50-87a6-2308b2ec6db2-registry-certificates\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.067125 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sppqx\" (UniqueName: \"kubernetes.io/projected/da22d6f3-c741-4e50-87a6-2308b2ec6db2-kube-api-access-sppqx\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.067143 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da22d6f3-c741-4e50-87a6-2308b2ec6db2-trusted-ca\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.067163 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/da22d6f3-c741-4e50-87a6-2308b2ec6db2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.067182 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/da22d6f3-c741-4e50-87a6-2308b2ec6db2-bound-sa-token\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.068225 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/da22d6f3-c741-4e50-87a6-2308b2ec6db2-ca-trust-extracted\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.068632 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/da22d6f3-c741-4e50-87a6-2308b2ec6db2-registry-certificates\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.069133 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/da22d6f3-c741-4e50-87a6-2308b2ec6db2-trusted-ca\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.072610 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/da22d6f3-c741-4e50-87a6-2308b2ec6db2-registry-tls\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.074707 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/da22d6f3-c741-4e50-87a6-2308b2ec6db2-installation-pull-secrets\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.083734 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sppqx\" (UniqueName: \"kubernetes.io/projected/da22d6f3-c741-4e50-87a6-2308b2ec6db2-kube-api-access-sppqx\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.090291 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/da22d6f3-c741-4e50-87a6-2308b2ec6db2-bound-sa-token\") pod \"image-registry-66df7c8f76-27vf5\" (UID: \"da22d6f3-c741-4e50-87a6-2308b2ec6db2\") " pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.128607 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-27vf5"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.282100 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8lns" event={"ID":"f25135e1-5701-4932-a01a-4e5f550181e6","Type":"ContainerStarted","Data":"82f081b6e6613e60635ce63ec60cba6a71715baf9d3524c9d638b7d1aabae47b"}
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.284658 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqppg" event={"ID":"5688f78b-5e14-4ff7-83d1-681f44a1273e","Type":"ContainerStarted","Data":"f41e7830676d0e3d89b0a7392d54eabf00a674017e434dc7ee68e9edbbeb58b9"}
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.286706 4708 generic.go:334] "Generic (PLEG): container finished" podID="1159c76c-e814-4a91-a99f-2b18b6758214" containerID="5ecc9bab0ef3cac730c8133f52f5f975068475b5014250b73987bce02d93ad32" exitCode=0
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.286742 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vlld" event={"ID":"1159c76c-e814-4a91-a99f-2b18b6758214","Type":"ContainerDied","Data":"5ecc9bab0ef3cac730c8133f52f5f975068475b5014250b73987bce02d93ad32"}
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.299583 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-x8lns" podStartSLOduration=2.885907682 podStartE2EDuration="5.299566167s" podCreationTimestamp="2026-02-27 17:00:44 +0000 UTC" firstStartedPulling="2026-02-27 17:00:46.243458936 +0000 UTC m=+444.759256533" lastFinishedPulling="2026-02-27 17:00:48.657117401 +0000 UTC m=+447.172915018" observedRunningTime="2026-02-27 17:00:49.296381255 +0000 UTC m=+447.812178842" watchObservedRunningTime="2026-02-27 17:00:49.299566167 +0000 UTC m=+447.815363754"
Feb 27 17:00:49 crc kubenswrapper[4708]: I0227 17:00:49.378670 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-27vf5"]
Feb 27 17:00:49 crc kubenswrapper[4708]: W0227 17:00:49.434814 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda22d6f3_c741_4e50_87a6_2308b2ec6db2.slice/crio-7e340663a42b461762b528b0b9424df3fef2c2c1b0a3608212354003e399d891 WatchSource:0}: Error finding container 7e340663a42b461762b528b0b9424df3fef2c2c1b0a3608212354003e399d891: Status 404 returned error can't find the container with id 7e340663a42b461762b528b0b9424df3fef2c2c1b0a3608212354003e399d891
Feb 27 17:00:50 crc kubenswrapper[4708]: I0227 17:00:50.292600 4708 generic.go:334] "Generic (PLEG): container finished" podID="7917d39c-2ac3-45d4-817d-d0722e37c5a5" containerID="000e0b3782265827767df6f22984b9b9c605b00ba09c9dd82dbeee5268374802" exitCode=0
event={"ID":"7917d39c-2ac3-45d4-817d-d0722e37c5a5","Type":"ContainerDied","Data":"000e0b3782265827767df6f22984b9b9c605b00ba09c9dd82dbeee5268374802"} Feb 27 17:00:50 crc kubenswrapper[4708]: I0227 17:00:50.296521 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4vlld" event={"ID":"1159c76c-e814-4a91-a99f-2b18b6758214","Type":"ContainerStarted","Data":"e0029450a0ae94d919c172c92017f5bd2bc10768cfc0597dad3dc95e9f6a77fc"} Feb 27 17:00:50 crc kubenswrapper[4708]: I0227 17:00:50.299384 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-27vf5" event={"ID":"da22d6f3-c741-4e50-87a6-2308b2ec6db2","Type":"ContainerStarted","Data":"9fd9c313c6f5ef183371a503223fc884ef1e189710c98d616d6b2411a010dda2"} Feb 27 17:00:50 crc kubenswrapper[4708]: I0227 17:00:50.299416 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-27vf5" event={"ID":"da22d6f3-c741-4e50-87a6-2308b2ec6db2","Type":"ContainerStarted","Data":"7e340663a42b461762b528b0b9424df3fef2c2c1b0a3608212354003e399d891"} Feb 27 17:00:50 crc kubenswrapper[4708]: I0227 17:00:50.299827 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-27vf5" Feb 27 17:00:50 crc kubenswrapper[4708]: I0227 17:00:50.301783 4708 generic.go:334] "Generic (PLEG): container finished" podID="5688f78b-5e14-4ff7-83d1-681f44a1273e" containerID="f41e7830676d0e3d89b0a7392d54eabf00a674017e434dc7ee68e9edbbeb58b9" exitCode=0 Feb 27 17:00:50 crc kubenswrapper[4708]: I0227 17:00:50.302294 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqppg" event={"ID":"5688f78b-5e14-4ff7-83d1-681f44a1273e","Type":"ContainerDied","Data":"f41e7830676d0e3d89b0a7392d54eabf00a674017e434dc7ee68e9edbbeb58b9"} Feb 27 17:00:50 crc kubenswrapper[4708]: I0227 17:00:50.333988 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4vlld" podStartSLOduration=2.761791582 podStartE2EDuration="6.333969513s" podCreationTimestamp="2026-02-27 17:00:44 +0000 UTC" firstStartedPulling="2026-02-27 17:00:46.236503474 +0000 UTC m=+444.752301091" lastFinishedPulling="2026-02-27 17:00:49.808681435 +0000 UTC m=+448.324479022" observedRunningTime="2026-02-27 17:00:50.33318233 +0000 UTC m=+448.848979917" watchObservedRunningTime="2026-02-27 17:00:50.333969513 +0000 UTC m=+448.849767110" Feb 27 17:00:50 crc kubenswrapper[4708]: I0227 17:00:50.376557 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-27vf5" podStartSLOduration=2.376538001 podStartE2EDuration="2.376538001s" podCreationTimestamp="2026-02-27 17:00:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:00:50.351801042 +0000 UTC m=+448.867598629" watchObservedRunningTime="2026-02-27 17:00:50.376538001 +0000 UTC m=+448.892335598" Feb 27 17:00:51 crc kubenswrapper[4708]: I0227 17:00:51.309332 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-jqppg" event={"ID":"5688f78b-5e14-4ff7-83d1-681f44a1273e","Type":"ContainerStarted","Data":"0ee727c29a7d5071c386c67bb1924a01d165dad497c4c4ec7c3b5b5666500961"} Feb 27 17:00:51 crc kubenswrapper[4708]: I0227 17:00:51.311605 4708 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-22xt4" event={"ID":"7917d39c-2ac3-45d4-817d-d0722e37c5a5","Type":"ContainerStarted","Data":"b42b106855f19a4268418f24cc74a7ab86cf8caef0e567c0fefa1bcb335a96b8"} Feb 27 17:00:51 crc kubenswrapper[4708]: I0227 17:00:51.339351 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-jqppg" podStartSLOduration=2.853911915 podStartE2EDuration="5.339336406s" podCreationTimestamp="2026-02-27 17:00:46 +0000 UTC" firstStartedPulling="2026-02-27 17:00:48.265041937 +0000 UTC m=+446.780839574" lastFinishedPulling="2026-02-27 17:00:50.750466478 +0000 UTC m=+449.266264065" observedRunningTime="2026-02-27 17:00:51.334759493 +0000 UTC m=+449.850557080" watchObservedRunningTime="2026-02-27 17:00:51.339336406 +0000 UTC m=+449.855133983" Feb 27 17:00:51 crc kubenswrapper[4708]: I0227 17:00:51.353629 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-22xt4" podStartSLOduration=2.932657165 podStartE2EDuration="5.353610851s" podCreationTimestamp="2026-02-27 17:00:46 +0000 UTC" firstStartedPulling="2026-02-27 17:00:48.265088068 +0000 UTC m=+446.780885655" lastFinishedPulling="2026-02-27 17:00:50.686041754 +0000 UTC m=+449.201839341" observedRunningTime="2026-02-27 17:00:51.352505639 +0000 UTC m=+449.868303226" watchObservedRunningTime="2026-02-27 17:00:51.353610851 +0000 UTC m=+449.869408438" Feb 27 17:00:54 crc kubenswrapper[4708]: I0227 17:00:54.748676 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4vlld" Feb 27 17:00:54 crc kubenswrapper[4708]: I0227 17:00:54.749195 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4vlld" Feb 27 17:00:54 crc kubenswrapper[4708]: I0227 17:00:54.983133 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:54 crc kubenswrapper[4708]: I0227 17:00:54.983326 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:55 crc kubenswrapper[4708]: I0227 17:00:55.056110 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:55 crc kubenswrapper[4708]: I0227 17:00:55.372573 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:00:55 crc kubenswrapper[4708]: I0227 17:00:55.802209 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4vlld" podUID="1159c76c-e814-4a91-a99f-2b18b6758214" containerName="registry-server" probeResult="failure" output=< Feb 27 17:00:55 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 17:00:55 crc kubenswrapper[4708]: > Feb 27 17:00:57 crc kubenswrapper[4708]: I0227 17:00:57.137508 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:57 crc kubenswrapper[4708]: I0227 17:00:57.138129 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:57 crc kubenswrapper[4708]: I0227 17:00:57.191976 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:57 crc kubenswrapper[4708]: I0227 17:00:57.326741 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:00:57 crc kubenswrapper[4708]: I0227 17:00:57.326908 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:00:57 crc kubenswrapper[4708]: I0227 17:00:57.392608 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-22xt4" Feb 27 17:00:57 crc kubenswrapper[4708]: I0227 17:00:57.393168 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:00:58 crc kubenswrapper[4708]: I0227 17:00:58.416901 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-jqppg" Feb 27 17:01:04 crc kubenswrapper[4708]: I0227 17:01:04.820287 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4vlld" Feb 27 17:01:04 crc kubenswrapper[4708]: I0227 17:01:04.871423 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4vlld" Feb 27 17:01:05 crc kubenswrapper[4708]: I0227 17:01:05.632022 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:01:05 crc kubenswrapper[4708]: I0227 17:01:05.632386 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:01:05 crc kubenswrapper[4708]: I0227 17:01:05.632451 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 17:01:05 crc kubenswrapper[4708]: I0227 17:01:05.633184 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bded9136f5ebbabd06a46307fbd007f7b15f87dcb532cd3c37c1fe08d4c6e0ab"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:01:05 crc kubenswrapper[4708]: I0227 17:01:05.633288 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://bded9136f5ebbabd06a46307fbd007f7b15f87dcb532cd3c37c1fe08d4c6e0ab" gracePeriod=600 Feb 27 17:01:06 crc kubenswrapper[4708]: I0227 17:01:06.397470 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="bded9136f5ebbabd06a46307fbd007f7b15f87dcb532cd3c37c1fe08d4c6e0ab" exitCode=0 Feb 27 17:01:06 crc kubenswrapper[4708]: I0227 17:01:06.397553 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"bded9136f5ebbabd06a46307fbd007f7b15f87dcb532cd3c37c1fe08d4c6e0ab"} Feb 27 17:01:06 crc kubenswrapper[4708]: I0227 17:01:06.397872 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"e83f2b773e936bb9a13f65de3b8f952c3975449adc00a4f209961e8bb7a647c2"} Feb 27 17:01:06 crc kubenswrapper[4708]: I0227 17:01:06.397897 4708 scope.go:117] "RemoveContainer" containerID="24f1fc3696b1002f4566bbdfd154763184905b98e2d3e513411705f6547eb53f" Feb 27 17:01:09 crc kubenswrapper[4708]: I0227 17:01:09.135413 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-27vf5" Feb 27 17:01:09 crc kubenswrapper[4708]: I0227 17:01:09.187413 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-89q5w"] Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.229311 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" podUID="e11dd889-39c0-43fc-aae8-fef332bad5ed" containerName="registry" containerID="cri-o://12cebafa1a507c3f4ff844d79a3b0b287ff7130b6df079faa50db413e374c33f" gracePeriod=30 Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.617928 4708 generic.go:334] "Generic (PLEG): container finished" podID="e11dd889-39c0-43fc-aae8-fef332bad5ed" containerID="12cebafa1a507c3f4ff844d79a3b0b287ff7130b6df079faa50db413e374c33f" exitCode=0 Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.617984 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" event={"ID":"e11dd889-39c0-43fc-aae8-fef332bad5ed","Type":"ContainerDied","Data":"12cebafa1a507c3f4ff844d79a3b0b287ff7130b6df079faa50db413e374c33f"} Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.677564 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.765684 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"e11dd889-39c0-43fc-aae8-fef332bad5ed\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.765728 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-registry-tls\") pod \"e11dd889-39c0-43fc-aae8-fef332bad5ed\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.765754 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e11dd889-39c0-43fc-aae8-fef332bad5ed-ca-trust-extracted\") pod \"e11dd889-39c0-43fc-aae8-fef332bad5ed\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.765824 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzcvm\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-kube-api-access-tzcvm\") pod \"e11dd889-39c0-43fc-aae8-fef332bad5ed\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.765866 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e11dd889-39c0-43fc-aae8-fef332bad5ed-registry-certificates\") pod \"e11dd889-39c0-43fc-aae8-fef332bad5ed\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.765900 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e11dd889-39c0-43fc-aae8-fef332bad5ed-installation-pull-secrets\") pod \"e11dd889-39c0-43fc-aae8-fef332bad5ed\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.765924 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-bound-sa-token\") pod \"e11dd889-39c0-43fc-aae8-fef332bad5ed\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.765945 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e11dd889-39c0-43fc-aae8-fef332bad5ed-trusted-ca\") pod \"e11dd889-39c0-43fc-aae8-fef332bad5ed\" (UID: \"e11dd889-39c0-43fc-aae8-fef332bad5ed\") " Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.767099 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e11dd889-39c0-43fc-aae8-fef332bad5ed-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e11dd889-39c0-43fc-aae8-fef332bad5ed" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.768195 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e11dd889-39c0-43fc-aae8-fef332bad5ed-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e11dd889-39c0-43fc-aae8-fef332bad5ed" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.777657 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e11dd889-39c0-43fc-aae8-fef332bad5ed-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e11dd889-39c0-43fc-aae8-fef332bad5ed" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.777723 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e11dd889-39c0-43fc-aae8-fef332bad5ed" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.778204 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e11dd889-39c0-43fc-aae8-fef332bad5ed" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.778664 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-kube-api-access-tzcvm" (OuterVolumeSpecName: "kube-api-access-tzcvm") pod "e11dd889-39c0-43fc-aae8-fef332bad5ed" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed"). InnerVolumeSpecName "kube-api-access-tzcvm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.782011 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "e11dd889-39c0-43fc-aae8-fef332bad5ed" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.804072 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e11dd889-39c0-43fc-aae8-fef332bad5ed-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e11dd889-39c0-43fc-aae8-fef332bad5ed" (UID: "e11dd889-39c0-43fc-aae8-fef332bad5ed"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.867134 4708 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-registry-tls\") on node \"crc\" DevicePath \"\"" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.867161 4708 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e11dd889-39c0-43fc-aae8-fef332bad5ed-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.867174 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzcvm\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-kube-api-access-tzcvm\") on node \"crc\" DevicePath \"\"" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.867190 4708 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e11dd889-39c0-43fc-aae8-fef332bad5ed-registry-certificates\") on node \"crc\" DevicePath \"\"" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.867202 4708 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e11dd889-39c0-43fc-aae8-fef332bad5ed-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.867213 4708 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e11dd889-39c0-43fc-aae8-fef332bad5ed-bound-sa-token\") on node \"crc\" DevicePath \"\"" Feb 27 17:01:34 crc kubenswrapper[4708]: I0227 17:01:34.867225 4708 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e11dd889-39c0-43fc-aae8-fef332bad5ed-trusted-ca\") on node \"crc\" DevicePath \"\"" Feb 27 17:01:35 crc kubenswrapper[4708]: I0227 17:01:35.628261 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" event={"ID":"e11dd889-39c0-43fc-aae8-fef332bad5ed","Type":"ContainerDied","Data":"4372a6e02ae0ecc2db3a805029d885f7e27aad76c499894849a37edf1ef04a06"} Feb 27 17:01:35 crc kubenswrapper[4708]: I0227 17:01:35.628361 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-89q5w" Feb 27 17:01:35 crc kubenswrapper[4708]: I0227 17:01:35.629336 4708 scope.go:117] "RemoveContainer" containerID="12cebafa1a507c3f4ff844d79a3b0b287ff7130b6df079faa50db413e374c33f" Feb 27 17:01:35 crc kubenswrapper[4708]: I0227 17:01:35.676028 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-89q5w"] Feb 27 17:01:35 crc kubenswrapper[4708]: I0227 17:01:35.683131 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-89q5w"] Feb 27 17:01:36 crc kubenswrapper[4708]: I0227 17:01:36.240183 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e11dd889-39c0-43fc-aae8-fef332bad5ed" path="/var/lib/kubelet/pods/e11dd889-39c0-43fc-aae8-fef332bad5ed/volumes" Feb 27 17:02:00 crc kubenswrapper[4708]: I0227 17:02:00.145100 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536862-qkswc"] Feb 27 17:02:00 crc kubenswrapper[4708]: E0227 17:02:00.146827 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e11dd889-39c0-43fc-aae8-fef332bad5ed" containerName="registry" Feb 27 17:02:00 crc kubenswrapper[4708]: I0227 17:02:00.146873 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e11dd889-39c0-43fc-aae8-fef332bad5ed" containerName="registry" Feb 27 17:02:00 crc kubenswrapper[4708]: I0227 17:02:00.147046 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="e11dd889-39c0-43fc-aae8-fef332bad5ed" containerName="registry" Feb 27 17:02:00 crc kubenswrapper[4708]: I0227 17:02:00.147632 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536862-qkswc" Feb 27 17:02:00 crc kubenswrapper[4708]: I0227 17:02:00.152527 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:02:00 crc kubenswrapper[4708]: I0227 17:02:00.153130 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:02:00 crc kubenswrapper[4708]: I0227 17:02:00.153215 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:02:00 crc kubenswrapper[4708]: I0227 17:02:00.154470 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536862-qkswc"] Feb 27 17:02:00 crc kubenswrapper[4708]: I0227 17:02:00.244951 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghb96\" (UniqueName: \"kubernetes.io/projected/fcfc32db-1f2d-454c-ac76-baba5f5423f6-kube-api-access-ghb96\") pod \"auto-csr-approver-29536862-qkswc\" (UID: \"fcfc32db-1f2d-454c-ac76-baba5f5423f6\") " pod="openshift-infra/auto-csr-approver-29536862-qkswc" Feb 27 17:02:00 crc kubenswrapper[4708]: I0227 17:02:00.346418 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghb96\" (UniqueName: \"kubernetes.io/projected/fcfc32db-1f2d-454c-ac76-baba5f5423f6-kube-api-access-ghb96\") pod \"auto-csr-approver-29536862-qkswc\" (UID: \"fcfc32db-1f2d-454c-ac76-baba5f5423f6\") " pod="openshift-infra/auto-csr-approver-29536862-qkswc" Feb 27 17:02:00 crc kubenswrapper[4708]: I0227 17:02:00.380934 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghb96\" 
(UniqueName: \"kubernetes.io/projected/fcfc32db-1f2d-454c-ac76-baba5f5423f6-kube-api-access-ghb96\") pod \"auto-csr-approver-29536862-qkswc\" (UID: \"fcfc32db-1f2d-454c-ac76-baba5f5423f6\") " pod="openshift-infra/auto-csr-approver-29536862-qkswc" Feb 27 17:02:00 crc kubenswrapper[4708]: I0227 17:02:00.473738 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536862-qkswc" Feb 27 17:02:00 crc kubenswrapper[4708]: I0227 17:02:00.749688 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536862-qkswc"] Feb 27 17:02:00 crc kubenswrapper[4708]: W0227 17:02:00.760495 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfcfc32db_1f2d_454c_ac76_baba5f5423f6.slice/crio-c2c93aba2a05896028a52ed9ad9bc9fc3a9613c280a04c3f7b3cd625d65faad5 WatchSource:0}: Error finding container c2c93aba2a05896028a52ed9ad9bc9fc3a9613c280a04c3f7b3cd625d65faad5: Status 404 returned error can't find the container with id c2c93aba2a05896028a52ed9ad9bc9fc3a9613c280a04c3f7b3cd625d65faad5 Feb 27 17:02:00 crc kubenswrapper[4708]: I0227 17:02:00.808994 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536862-qkswc" event={"ID":"fcfc32db-1f2d-454c-ac76-baba5f5423f6","Type":"ContainerStarted","Data":"c2c93aba2a05896028a52ed9ad9bc9fc3a9613c280a04c3f7b3cd625d65faad5"} Feb 27 17:02:02 crc kubenswrapper[4708]: I0227 17:02:02.826002 4708 generic.go:334] "Generic (PLEG): container finished" podID="fcfc32db-1f2d-454c-ac76-baba5f5423f6" containerID="665b03431ec166b8505cdc2e5a8f29e173ed7bdbbfe9cf74fe04d7744bd0872f" exitCode=0 Feb 27 17:02:02 crc kubenswrapper[4708]: I0227 17:02:02.826089 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536862-qkswc" event={"ID":"fcfc32db-1f2d-454c-ac76-baba5f5423f6","Type":"ContainerDied","Data":"665b03431ec166b8505cdc2e5a8f29e173ed7bdbbfe9cf74fe04d7744bd0872f"} Feb 27 17:02:04 crc kubenswrapper[4708]: I0227 17:02:04.162195 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536862-qkswc" Feb 27 17:02:04 crc kubenswrapper[4708]: I0227 17:02:04.303677 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghb96\" (UniqueName: \"kubernetes.io/projected/fcfc32db-1f2d-454c-ac76-baba5f5423f6-kube-api-access-ghb96\") pod \"fcfc32db-1f2d-454c-ac76-baba5f5423f6\" (UID: \"fcfc32db-1f2d-454c-ac76-baba5f5423f6\") " Feb 27 17:02:04 crc kubenswrapper[4708]: I0227 17:02:04.313687 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcfc32db-1f2d-454c-ac76-baba5f5423f6-kube-api-access-ghb96" (OuterVolumeSpecName: "kube-api-access-ghb96") pod "fcfc32db-1f2d-454c-ac76-baba5f5423f6" (UID: "fcfc32db-1f2d-454c-ac76-baba5f5423f6"). InnerVolumeSpecName "kube-api-access-ghb96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:02:04 crc kubenswrapper[4708]: I0227 17:02:04.406540 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ghb96\" (UniqueName: \"kubernetes.io/projected/fcfc32db-1f2d-454c-ac76-baba5f5423f6-kube-api-access-ghb96\") on node \"crc\" DevicePath \"\"" Feb 27 17:02:04 crc kubenswrapper[4708]: I0227 17:02:04.842673 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536862-qkswc" event={"ID":"fcfc32db-1f2d-454c-ac76-baba5f5423f6","Type":"ContainerDied","Data":"c2c93aba2a05896028a52ed9ad9bc9fc3a9613c280a04c3f7b3cd625d65faad5"} Feb 27 17:02:04 crc kubenswrapper[4708]: I0227 17:02:04.843084 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2c93aba2a05896028a52ed9ad9bc9fc3a9613c280a04c3f7b3cd625d65faad5" Feb 27 17:02:04 crc kubenswrapper[4708]: I0227 17:02:04.842730 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536862-qkswc" Feb 27 17:02:05 crc kubenswrapper[4708]: I0227 17:02:05.261236 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536856-lj688"] Feb 27 17:02:05 crc kubenswrapper[4708]: I0227 17:02:05.270379 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536856-lj688"] Feb 27 17:02:06 crc kubenswrapper[4708]: I0227 17:02:06.240500 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8c016d5-5c1f-4680-a678-8568d218617e" path="/var/lib/kubelet/pods/c8c016d5-5c1f-4680-a678-8568d218617e/volumes" Feb 27 17:03:05 crc kubenswrapper[4708]: I0227 17:03:05.632059 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:03:05 crc kubenswrapper[4708]: I0227 17:03:05.634746 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:03:35 crc kubenswrapper[4708]: I0227 17:03:35.632467 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:03:35 crc kubenswrapper[4708]: I0227 17:03:35.633095 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:04:00 crc kubenswrapper[4708]: I0227 17:04:00.152903 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536864-ztd9p"] Feb 27 17:04:00 crc kubenswrapper[4708]: E0227 17:04:00.154122 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcfc32db-1f2d-454c-ac76-baba5f5423f6" containerName="oc" Feb 27 17:04:00 crc 
Feb 27 17:04:00 crc kubenswrapper[4708]: I0227 17:04:00.154145 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcfc32db-1f2d-454c-ac76-baba5f5423f6" containerName="oc"
Feb 27 17:04:00 crc kubenswrapper[4708]: I0227 17:04:00.154342 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcfc32db-1f2d-454c-ac76-baba5f5423f6" containerName="oc"
Feb 27 17:04:00 crc kubenswrapper[4708]: I0227 17:04:00.155074 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536864-ztd9p"
Feb 27 17:04:00 crc kubenswrapper[4708]: I0227 17:04:00.158128 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5"
Feb 27 17:04:00 crc kubenswrapper[4708]: I0227 17:04:00.158562 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 17:04:00 crc kubenswrapper[4708]: I0227 17:04:00.161015 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 17:04:00 crc kubenswrapper[4708]: I0227 17:04:00.175297 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536864-ztd9p"]
Feb 27 17:04:00 crc kubenswrapper[4708]: I0227 17:04:00.310029 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbsnj\" (UniqueName: \"kubernetes.io/projected/ef0f4977-e298-40f1-8d1d-23ebf0111f9f-kube-api-access-zbsnj\") pod \"auto-csr-approver-29536864-ztd9p\" (UID: \"ef0f4977-e298-40f1-8d1d-23ebf0111f9f\") " pod="openshift-infra/auto-csr-approver-29536864-ztd9p"
Feb 27 17:04:00 crc kubenswrapper[4708]: I0227 17:04:00.411138 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbsnj\" (UniqueName: \"kubernetes.io/projected/ef0f4977-e298-40f1-8d1d-23ebf0111f9f-kube-api-access-zbsnj\") pod \"auto-csr-approver-29536864-ztd9p\" (UID: \"ef0f4977-e298-40f1-8d1d-23ebf0111f9f\") " pod="openshift-infra/auto-csr-approver-29536864-ztd9p"
Feb 27 17:04:00 crc kubenswrapper[4708]: I0227 17:04:00.447419 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbsnj\" (UniqueName: \"kubernetes.io/projected/ef0f4977-e298-40f1-8d1d-23ebf0111f9f-kube-api-access-zbsnj\") pod \"auto-csr-approver-29536864-ztd9p\" (UID: \"ef0f4977-e298-40f1-8d1d-23ebf0111f9f\") " pod="openshift-infra/auto-csr-approver-29536864-ztd9p"
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536864-ztd9p" Feb 27 17:04:00 crc kubenswrapper[4708]: I0227 17:04:00.736993 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536864-ztd9p"] Feb 27 17:04:00 crc kubenswrapper[4708]: I0227 17:04:00.754748 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:04:01 crc kubenswrapper[4708]: I0227 17:04:01.686896 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536864-ztd9p" event={"ID":"ef0f4977-e298-40f1-8d1d-23ebf0111f9f","Type":"ContainerStarted","Data":"ba84b91aaab63d6721a9f15be5324ba9e314889ec5fa9569b15888ce12f71422"} Feb 27 17:04:02 crc kubenswrapper[4708]: I0227 17:04:02.699532 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef0f4977-e298-40f1-8d1d-23ebf0111f9f" containerID="39899fe21d70373809577aba9526e08716e3482cfa79929bdbe852ac9482d42a" exitCode=0 Feb 27 17:04:02 crc kubenswrapper[4708]: I0227 17:04:02.699611 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536864-ztd9p" event={"ID":"ef0f4977-e298-40f1-8d1d-23ebf0111f9f","Type":"ContainerDied","Data":"39899fe21d70373809577aba9526e08716e3482cfa79929bdbe852ac9482d42a"} Feb 27 17:04:04 crc kubenswrapper[4708]: I0227 17:04:04.066840 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536864-ztd9p" Feb 27 17:04:04 crc kubenswrapper[4708]: I0227 17:04:04.164503 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbsnj\" (UniqueName: \"kubernetes.io/projected/ef0f4977-e298-40f1-8d1d-23ebf0111f9f-kube-api-access-zbsnj\") pod \"ef0f4977-e298-40f1-8d1d-23ebf0111f9f\" (UID: \"ef0f4977-e298-40f1-8d1d-23ebf0111f9f\") " Feb 27 17:04:04 crc kubenswrapper[4708]: I0227 17:04:04.173522 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef0f4977-e298-40f1-8d1d-23ebf0111f9f-kube-api-access-zbsnj" (OuterVolumeSpecName: "kube-api-access-zbsnj") pod "ef0f4977-e298-40f1-8d1d-23ebf0111f9f" (UID: "ef0f4977-e298-40f1-8d1d-23ebf0111f9f"). InnerVolumeSpecName "kube-api-access-zbsnj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:04:04 crc kubenswrapper[4708]: I0227 17:04:04.265690 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbsnj\" (UniqueName: \"kubernetes.io/projected/ef0f4977-e298-40f1-8d1d-23ebf0111f9f-kube-api-access-zbsnj\") on node \"crc\" DevicePath \"\"" Feb 27 17:04:04 crc kubenswrapper[4708]: I0227 17:04:04.720812 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536864-ztd9p" event={"ID":"ef0f4977-e298-40f1-8d1d-23ebf0111f9f","Type":"ContainerDied","Data":"ba84b91aaab63d6721a9f15be5324ba9e314889ec5fa9569b15888ce12f71422"} Feb 27 17:04:04 crc kubenswrapper[4708]: I0227 17:04:04.720910 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba84b91aaab63d6721a9f15be5324ba9e314889ec5fa9569b15888ce12f71422" Feb 27 17:04:04 crc kubenswrapper[4708]: I0227 17:04:04.720921 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536864-ztd9p" Feb 27 17:04:05 crc kubenswrapper[4708]: I0227 17:04:05.144185 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536858-rfn4q"] Feb 27 17:04:05 crc kubenswrapper[4708]: I0227 17:04:05.151411 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536858-rfn4q"] Feb 27 17:04:05 crc kubenswrapper[4708]: I0227 17:04:05.635435 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:04:05 crc kubenswrapper[4708]: I0227 17:04:05.636428 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:04:05 crc kubenswrapper[4708]: I0227 17:04:05.636582 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 17:04:05 crc kubenswrapper[4708]: I0227 17:04:05.637608 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e83f2b773e936bb9a13f65de3b8f952c3975449adc00a4f209961e8bb7a647c2"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:04:05 crc kubenswrapper[4708]: I0227 17:04:05.637721 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://e83f2b773e936bb9a13f65de3b8f952c3975449adc00a4f209961e8bb7a647c2" gracePeriod=600 Feb 27 17:04:06 crc kubenswrapper[4708]: I0227 17:04:06.278497 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c99fdbbd-b661-4920-975d-c72e040d08fa" path="/var/lib/kubelet/pods/c99fdbbd-b661-4920-975d-c72e040d08fa/volumes" Feb 27 17:04:06 crc kubenswrapper[4708]: I0227 17:04:06.739569 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="e83f2b773e936bb9a13f65de3b8f952c3975449adc00a4f209961e8bb7a647c2" exitCode=0 Feb 27 17:04:06 crc kubenswrapper[4708]: I0227 17:04:06.739625 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"e83f2b773e936bb9a13f65de3b8f952c3975449adc00a4f209961e8bb7a647c2"} Feb 27 17:04:06 crc kubenswrapper[4708]: I0227 17:04:06.740122 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"73433f85f32a02d199ead494dd30f304e263e12a457d51cac8315ed1c3121a5b"} Feb 27 17:04:06 crc kubenswrapper[4708]: I0227 17:04:06.740220 4708 scope.go:117] "RemoveContainer" 
containerID="bded9136f5ebbabd06a46307fbd007f7b15f87dcb532cd3c37c1fe08d4c6e0ab" Feb 27 17:04:22 crc kubenswrapper[4708]: I0227 17:04:22.615688 4708 scope.go:117] "RemoveContainer" containerID="14608b95c26287a996396e9d6c80a8d07d401713ca295bed224f774de333adbd" Feb 27 17:04:22 crc kubenswrapper[4708]: I0227 17:04:22.645144 4708 scope.go:117] "RemoveContainer" containerID="7c3afdfc9ffdea31879ef9a422441c28ca30755918ab57761fd5f28be8a5469c" Feb 27 17:04:22 crc kubenswrapper[4708]: I0227 17:04:22.691336 4708 scope.go:117] "RemoveContainer" containerID="1e577a12ac8338e8a615ae393e48e602dfdf4491cf06ebec6dfae1b4cbfc399c" Feb 27 17:05:22 crc kubenswrapper[4708]: I0227 17:05:22.791008 4708 scope.go:117] "RemoveContainer" containerID="50ccc25fb701392fba2b6b461b90820ec8b4c74f3fe16296687dbf20847b1812" Feb 27 17:06:00 crc kubenswrapper[4708]: I0227 17:06:00.143910 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536866-n7s88"] Feb 27 17:06:00 crc kubenswrapper[4708]: E0227 17:06:00.144800 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef0f4977-e298-40f1-8d1d-23ebf0111f9f" containerName="oc" Feb 27 17:06:00 crc kubenswrapper[4708]: I0227 17:06:00.144820 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef0f4977-e298-40f1-8d1d-23ebf0111f9f" containerName="oc" Feb 27 17:06:00 crc kubenswrapper[4708]: I0227 17:06:00.145025 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef0f4977-e298-40f1-8d1d-23ebf0111f9f" containerName="oc" Feb 27 17:06:00 crc kubenswrapper[4708]: I0227 17:06:00.145668 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536866-n7s88" Feb 27 17:06:00 crc kubenswrapper[4708]: I0227 17:06:00.148572 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:06:00 crc kubenswrapper[4708]: I0227 17:06:00.148692 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:06:00 crc kubenswrapper[4708]: I0227 17:06:00.148893 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:06:00 crc kubenswrapper[4708]: I0227 17:06:00.154496 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536866-n7s88"] Feb 27 17:06:00 crc kubenswrapper[4708]: I0227 17:06:00.312696 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wfww\" (UniqueName: \"kubernetes.io/projected/2678f124-0296-46f6-9df8-ec03bde26be0-kube-api-access-9wfww\") pod \"auto-csr-approver-29536866-n7s88\" (UID: \"2678f124-0296-46f6-9df8-ec03bde26be0\") " pod="openshift-infra/auto-csr-approver-29536866-n7s88" Feb 27 17:06:00 crc kubenswrapper[4708]: I0227 17:06:00.414107 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wfww\" (UniqueName: \"kubernetes.io/projected/2678f124-0296-46f6-9df8-ec03bde26be0-kube-api-access-9wfww\") pod \"auto-csr-approver-29536866-n7s88\" (UID: \"2678f124-0296-46f6-9df8-ec03bde26be0\") " pod="openshift-infra/auto-csr-approver-29536866-n7s88" Feb 27 17:06:00 crc kubenswrapper[4708]: I0227 17:06:00.445718 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wfww\" (UniqueName: \"kubernetes.io/projected/2678f124-0296-46f6-9df8-ec03bde26be0-kube-api-access-9wfww\") pod 
\"auto-csr-approver-29536866-n7s88\" (UID: \"2678f124-0296-46f6-9df8-ec03bde26be0\") " pod="openshift-infra/auto-csr-approver-29536866-n7s88" Feb 27 17:06:00 crc kubenswrapper[4708]: I0227 17:06:00.469542 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536866-n7s88" Feb 27 17:06:00 crc kubenswrapper[4708]: I0227 17:06:00.757578 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536866-n7s88"] Feb 27 17:06:01 crc kubenswrapper[4708]: I0227 17:06:01.406467 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536866-n7s88" event={"ID":"2678f124-0296-46f6-9df8-ec03bde26be0","Type":"ContainerStarted","Data":"a576efd0127a5b531f357886cb8e90a85da55c8a346fffadf9d676273f595f4e"} Feb 27 17:06:02 crc kubenswrapper[4708]: I0227 17:06:02.415239 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536866-n7s88" event={"ID":"2678f124-0296-46f6-9df8-ec03bde26be0","Type":"ContainerStarted","Data":"a898a6e2591ffb60664c4d93c890b80f304b9ddffd7d4a5c0e14e049f690f07c"} Feb 27 17:06:02 crc kubenswrapper[4708]: I0227 17:06:02.430654 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536866-n7s88" podStartSLOduration=1.470698597 podStartE2EDuration="2.430626453s" podCreationTimestamp="2026-02-27 17:06:00 +0000 UTC" firstStartedPulling="2026-02-27 17:06:00.768645843 +0000 UTC m=+759.284443440" lastFinishedPulling="2026-02-27 17:06:01.728573669 +0000 UTC m=+760.244371296" observedRunningTime="2026-02-27 17:06:02.430000375 +0000 UTC m=+760.945797972" watchObservedRunningTime="2026-02-27 17:06:02.430626453 +0000 UTC m=+760.946424090" Feb 27 17:06:03 crc kubenswrapper[4708]: I0227 17:06:03.424211 4708 generic.go:334] "Generic (PLEG): container finished" podID="2678f124-0296-46f6-9df8-ec03bde26be0" containerID="a898a6e2591ffb60664c4d93c890b80f304b9ddffd7d4a5c0e14e049f690f07c" exitCode=0 Feb 27 17:06:03 crc kubenswrapper[4708]: I0227 17:06:03.424287 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536866-n7s88" event={"ID":"2678f124-0296-46f6-9df8-ec03bde26be0","Type":"ContainerDied","Data":"a898a6e2591ffb60664c4d93c890b80f304b9ddffd7d4a5c0e14e049f690f07c"} Feb 27 17:06:04 crc kubenswrapper[4708]: I0227 17:06:04.742772 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536866-n7s88" Feb 27 17:06:04 crc kubenswrapper[4708]: I0227 17:06:04.872787 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wfww\" (UniqueName: \"kubernetes.io/projected/2678f124-0296-46f6-9df8-ec03bde26be0-kube-api-access-9wfww\") pod \"2678f124-0296-46f6-9df8-ec03bde26be0\" (UID: \"2678f124-0296-46f6-9df8-ec03bde26be0\") " Feb 27 17:06:04 crc kubenswrapper[4708]: I0227 17:06:04.881195 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2678f124-0296-46f6-9df8-ec03bde26be0-kube-api-access-9wfww" (OuterVolumeSpecName: "kube-api-access-9wfww") pod "2678f124-0296-46f6-9df8-ec03bde26be0" (UID: "2678f124-0296-46f6-9df8-ec03bde26be0"). InnerVolumeSpecName "kube-api-access-9wfww". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:06:04 crc kubenswrapper[4708]: I0227 17:06:04.975017 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wfww\" (UniqueName: \"kubernetes.io/projected/2678f124-0296-46f6-9df8-ec03bde26be0-kube-api-access-9wfww\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:05 crc kubenswrapper[4708]: I0227 17:06:05.331670 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536860-fnz5n"] Feb 27 17:06:05 crc kubenswrapper[4708]: I0227 17:06:05.338181 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536860-fnz5n"] Feb 27 17:06:05 crc kubenswrapper[4708]: I0227 17:06:05.442183 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536866-n7s88" event={"ID":"2678f124-0296-46f6-9df8-ec03bde26be0","Type":"ContainerDied","Data":"a576efd0127a5b531f357886cb8e90a85da55c8a346fffadf9d676273f595f4e"} Feb 27 17:06:05 crc kubenswrapper[4708]: I0227 17:06:05.442237 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a576efd0127a5b531f357886cb8e90a85da55c8a346fffadf9d676273f595f4e" Feb 27 17:06:05 crc kubenswrapper[4708]: I0227 17:06:05.442271 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536866-n7s88" Feb 27 17:06:05 crc kubenswrapper[4708]: I0227 17:06:05.631820 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:06:05 crc kubenswrapper[4708]: I0227 17:06:05.631926 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:06:06 crc kubenswrapper[4708]: I0227 17:06:06.237763 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7424251f-c1f8-48a8-8de9-51b1519ccb44" path="/var/lib/kubelet/pods/7424251f-c1f8-48a8-8de9-51b1519ccb44/volumes" Feb 27 17:06:22 crc kubenswrapper[4708]: I0227 17:06:22.846805 4708 scope.go:117] "RemoveContainer" containerID="3fb7c56ad736d08f51881cbad04dd8f518cccf8fdb5151b3f1168adcad35b4d3" Feb 27 17:06:35 crc kubenswrapper[4708]: I0227 17:06:35.631663 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:06:35 crc kubenswrapper[4708]: I0227 17:06:35.632555 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.262592 4708 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p"] Feb 27 17:06:41 crc kubenswrapper[4708]: E0227 17:06:41.263193 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2678f124-0296-46f6-9df8-ec03bde26be0" containerName="oc" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.263215 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="2678f124-0296-46f6-9df8-ec03bde26be0" containerName="oc" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.263404 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="2678f124-0296-46f6-9df8-ec03bde26be0" containerName="oc" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.264561 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.267007 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.285181 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p"] Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.430466 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9061e6c9-6752-4d0b-adbc-a10578e633fc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p\" (UID: \"9061e6c9-6752-4d0b-adbc-a10578e633fc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.430636 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9061e6c9-6752-4d0b-adbc-a10578e633fc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p\" (UID: \"9061e6c9-6752-4d0b-adbc-a10578e633fc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.430684 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxmb6\" (UniqueName: \"kubernetes.io/projected/9061e6c9-6752-4d0b-adbc-a10578e633fc-kube-api-access-mxmb6\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p\" (UID: \"9061e6c9-6752-4d0b-adbc-a10578e633fc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.531664 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9061e6c9-6752-4d0b-adbc-a10578e633fc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p\" (UID: \"9061e6c9-6752-4d0b-adbc-a10578e633fc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.531748 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxmb6\" (UniqueName: \"kubernetes.io/projected/9061e6c9-6752-4d0b-adbc-a10578e633fc-kube-api-access-mxmb6\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p\" (UID: 
\"9061e6c9-6752-4d0b-adbc-a10578e633fc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.531839 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9061e6c9-6752-4d0b-adbc-a10578e633fc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p\" (UID: \"9061e6c9-6752-4d0b-adbc-a10578e633fc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.532656 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9061e6c9-6752-4d0b-adbc-a10578e633fc-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p\" (UID: \"9061e6c9-6752-4d0b-adbc-a10578e633fc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.532781 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9061e6c9-6752-4d0b-adbc-a10578e633fc-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p\" (UID: \"9061e6c9-6752-4d0b-adbc-a10578e633fc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.564138 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxmb6\" (UniqueName: \"kubernetes.io/projected/9061e6c9-6752-4d0b-adbc-a10578e633fc-kube-api-access-mxmb6\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p\" (UID: \"9061e6c9-6752-4d0b-adbc-a10578e633fc\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" Feb 27 17:06:41 crc kubenswrapper[4708]: I0227 17:06:41.588509 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" Feb 27 17:06:42 crc kubenswrapper[4708]: I0227 17:06:42.112790 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p"] Feb 27 17:06:42 crc kubenswrapper[4708]: I0227 17:06:42.711627 4708 generic.go:334] "Generic (PLEG): container finished" podID="9061e6c9-6752-4d0b-adbc-a10578e633fc" containerID="1f8db2d8a3edad8b80c9f5cce40226af6f785d72ec4838a9360c662c5fc89843" exitCode=0 Feb 27 17:06:42 crc kubenswrapper[4708]: I0227 17:06:42.711695 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" event={"ID":"9061e6c9-6752-4d0b-adbc-a10578e633fc","Type":"ContainerDied","Data":"1f8db2d8a3edad8b80c9f5cce40226af6f785d72ec4838a9360c662c5fc89843"} Feb 27 17:06:42 crc kubenswrapper[4708]: I0227 17:06:42.712000 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" event={"ID":"9061e6c9-6752-4d0b-adbc-a10578e633fc","Type":"ContainerStarted","Data":"ba3a568a2d6ac2b503bf74a571e5e2b26c5d7324281a4f02366221a1ee7ed386"} Feb 27 17:06:44 crc kubenswrapper[4708]: I0227 17:06:44.723712 4708 generic.go:334] "Generic (PLEG): container finished" podID="9061e6c9-6752-4d0b-adbc-a10578e633fc" containerID="cf2e402e8928f62709b195c6949dd4e9b2cf06d232b71cc30e0412950e010671" exitCode=0 Feb 27 17:06:44 crc kubenswrapper[4708]: I0227 17:06:44.723749 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" event={"ID":"9061e6c9-6752-4d0b-adbc-a10578e633fc","Type":"ContainerDied","Data":"cf2e402e8928f62709b195c6949dd4e9b2cf06d232b71cc30e0412950e010671"} Feb 27 17:06:45 crc kubenswrapper[4708]: I0227 17:06:45.734822 4708 generic.go:334] "Generic (PLEG): container finished" podID="9061e6c9-6752-4d0b-adbc-a10578e633fc" containerID="4461f5c1d8231a8bcbb1089ff77048b45164eafd009919667416205349365406" exitCode=0 Feb 27 17:06:45 crc kubenswrapper[4708]: I0227 17:06:45.734929 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" event={"ID":"9061e6c9-6752-4d0b-adbc-a10578e633fc","Type":"ContainerDied","Data":"4461f5c1d8231a8bcbb1089ff77048b45164eafd009919667416205349365406"} Feb 27 17:06:47 crc kubenswrapper[4708]: I0227 17:06:47.066835 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" Feb 27 17:06:47 crc kubenswrapper[4708]: I0227 17:06:47.109305 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxmb6\" (UniqueName: \"kubernetes.io/projected/9061e6c9-6752-4d0b-adbc-a10578e633fc-kube-api-access-mxmb6\") pod \"9061e6c9-6752-4d0b-adbc-a10578e633fc\" (UID: \"9061e6c9-6752-4d0b-adbc-a10578e633fc\") " Feb 27 17:06:47 crc kubenswrapper[4708]: I0227 17:06:47.109402 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9061e6c9-6752-4d0b-adbc-a10578e633fc-bundle\") pod \"9061e6c9-6752-4d0b-adbc-a10578e633fc\" (UID: \"9061e6c9-6752-4d0b-adbc-a10578e633fc\") " Feb 27 17:06:47 crc kubenswrapper[4708]: I0227 17:06:47.109503 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9061e6c9-6752-4d0b-adbc-a10578e633fc-util\") pod \"9061e6c9-6752-4d0b-adbc-a10578e633fc\" (UID: \"9061e6c9-6752-4d0b-adbc-a10578e633fc\") " Feb 27 17:06:47 crc kubenswrapper[4708]: I0227 17:06:47.114219 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9061e6c9-6752-4d0b-adbc-a10578e633fc-bundle" (OuterVolumeSpecName: "bundle") pod "9061e6c9-6752-4d0b-adbc-a10578e633fc" (UID: "9061e6c9-6752-4d0b-adbc-a10578e633fc"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:06:47 crc kubenswrapper[4708]: I0227 17:06:47.118348 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9061e6c9-6752-4d0b-adbc-a10578e633fc-kube-api-access-mxmb6" (OuterVolumeSpecName: "kube-api-access-mxmb6") pod "9061e6c9-6752-4d0b-adbc-a10578e633fc" (UID: "9061e6c9-6752-4d0b-adbc-a10578e633fc"). InnerVolumeSpecName "kube-api-access-mxmb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:06:47 crc kubenswrapper[4708]: I0227 17:06:47.142322 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9061e6c9-6752-4d0b-adbc-a10578e633fc-util" (OuterVolumeSpecName: "util") pod "9061e6c9-6752-4d0b-adbc-a10578e633fc" (UID: "9061e6c9-6752-4d0b-adbc-a10578e633fc"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:06:47 crc kubenswrapper[4708]: I0227 17:06:47.211148 4708 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9061e6c9-6752-4d0b-adbc-a10578e633fc-util\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:47 crc kubenswrapper[4708]: I0227 17:06:47.211210 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxmb6\" (UniqueName: \"kubernetes.io/projected/9061e6c9-6752-4d0b-adbc-a10578e633fc-kube-api-access-mxmb6\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:47 crc kubenswrapper[4708]: I0227 17:06:47.211236 4708 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9061e6c9-6752-4d0b-adbc-a10578e633fc-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:47 crc kubenswrapper[4708]: I0227 17:06:47.754309 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" event={"ID":"9061e6c9-6752-4d0b-adbc-a10578e633fc","Type":"ContainerDied","Data":"ba3a568a2d6ac2b503bf74a571e5e2b26c5d7324281a4f02366221a1ee7ed386"} Feb 27 17:06:47 crc kubenswrapper[4708]: I0227 17:06:47.754368 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba3a568a2d6ac2b503bf74a571e5e2b26c5d7324281a4f02366221a1ee7ed386" Feb 27 17:06:47 crc kubenswrapper[4708]: I0227 17:06:47.754466 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.135891 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l82mg"] Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.136411 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovn-controller" containerID="cri-o://5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b" gracePeriod=30 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.136511 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="kube-rbac-proxy-node" containerID="cri-o://3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779" gracePeriod=30 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.136509 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="nbdb" containerID="cri-o://de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e" gracePeriod=30 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.136544 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16" gracePeriod=30 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.136584 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="sbdb" 
containerID="cri-o://8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963" gracePeriod=30 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.136607 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovn-acl-logging" containerID="cri-o://5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a" gracePeriod=30 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.136567 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="northd" containerID="cri-o://c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb" gracePeriod=30 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.199370 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" containerID="cri-o://a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914" gracePeriod=30 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.592561 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/3.log" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.594231 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovn-acl-logging/0.log" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.594582 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovn-controller/0.log" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.594907 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.686729 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-var-lib-openvswitch\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.686784 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-var-lib-cni-networks-ovn-kubernetes\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.686804 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-log-socket\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.686818 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-run-netns\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.686866 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-env-overrides\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.686880 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-systemd\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.686915 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-ovnkube-config\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.686952 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-cni-bin\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.686976 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc6tg\" (UniqueName: \"kubernetes.io/projected/7efaba13-6a00-4f12-9e83-5a66a2246554-kube-api-access-dc6tg\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687008 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-kubelet\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687039 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7efaba13-6a00-4f12-9e83-5a66a2246554-ovn-node-metrics-cert\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687054 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-systemd-units\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687084 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-ovn\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687114 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-openvswitch\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687128 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-node-log\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687166 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-etc-openvswitch\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687184 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-cni-netd\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687202 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-run-ovn-kubernetes\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687240 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-ovnkube-script-lib\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687256 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-slash\") pod \"7efaba13-6a00-4f12-9e83-5a66a2246554\" (UID: \"7efaba13-6a00-4f12-9e83-5a66a2246554\") " Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.686902 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687488 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-slash" (OuterVolumeSpecName: "host-slash") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687501 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-node-log" (OuterVolumeSpecName: "node-log") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687513 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687528 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687537 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687541 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.686933 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.686929 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.686950 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-log-socket" (OuterVolumeSpecName: "log-socket") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687239 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687262 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687467 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687469 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687486 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). 
InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.687930 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.694322 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.704314 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.706990 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-5hk97"] Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707191 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9061e6c9-6752-4d0b-adbc-a10578e633fc" containerName="pull" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707206 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9061e6c9-6752-4d0b-adbc-a10578e633fc" containerName="pull" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707215 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707223 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707229 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="kubecfg-setup" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707235 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="kubecfg-setup" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707245 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9061e6c9-6752-4d0b-adbc-a10578e633fc" containerName="extract" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707251 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9061e6c9-6752-4d0b-adbc-a10578e633fc" containerName="extract" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707260 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="northd" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707265 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="northd" Feb 27 17:06:52 crc 
kubenswrapper[4708]: E0227 17:06:52.707273 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="sbdb" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707280 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="sbdb" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707287 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="kube-rbac-proxy-ovn-metrics" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707294 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="kube-rbac-proxy-ovn-metrics" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707304 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707309 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707320 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="kube-rbac-proxy-node" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707327 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="kube-rbac-proxy-node" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707334 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707340 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707347 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovn-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707352 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovn-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707360 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovn-acl-logging" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707365 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovn-acl-logging" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707372 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9061e6c9-6752-4d0b-adbc-a10578e633fc" containerName="util" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707377 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9061e6c9-6752-4d0b-adbc-a10578e633fc" containerName="util" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707384 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="nbdb" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707390 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="nbdb" Feb 27 17:06:52 crc 
kubenswrapper[4708]: I0227 17:06:52.707479 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="kube-rbac-proxy-ovn-metrics" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707488 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="nbdb" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707495 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="northd" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707504 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="9061e6c9-6752-4d0b-adbc-a10578e633fc" containerName="extract" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707511 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707517 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707524 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovn-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707531 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707539 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="kube-rbac-proxy-node" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707548 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovn-acl-logging" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707556 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="sbdb" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707645 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707652 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.707663 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707668 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707742 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.707750 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerName="ovnkube-controller" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.709193 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
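
The cpu_manager/state_mem/memory_manager burst above is admission-time housekeeping: when ovnkube-node-5hk97 is added, RemoveStaleState drops CPU- and memory-manager bookkeeping for containers of pods that no longer exist (the finished bundle pod 9061e6c9… and the just-deleted ovnkube-node pod 7efaba13…). "ovnkube-controller" appears several times, plausibly because each past restart of that container left its own stale entry; the .../ovnkube-controller/3.log path parsed earlier implies at least a fourth instance. As the log itself shows, the same removal is emitted at error level by cpu_manager.go:410 and at info level by state_mem.go:107 and memory_manager.go:354.
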
volume "kubernetes.io/projected/7efaba13-6a00-4f12-9e83-5a66a2246554-kube-api-access-dc6tg" (OuterVolumeSpecName: "kube-api-access-dc6tg") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "kube-api-access-dc6tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.709647 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7efaba13-6a00-4f12-9e83-5a66a2246554-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "7efaba13-6a00-4f12-9e83-5a66a2246554" (UID: "7efaba13-6a00-4f12-9e83-5a66a2246554"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.710107 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.778529 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovnkube-controller/3.log" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.780411 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovn-acl-logging/0.log" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.780799 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-l82mg_7efaba13-6a00-4f12-9e83-5a66a2246554/ovn-controller/0.log" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.787545 4708 generic.go:334] "Generic (PLEG): container finished" podID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerID="a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914" exitCode=0 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.787587 4708 generic.go:334] "Generic (PLEG): container finished" podID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerID="8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963" exitCode=0 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.787598 4708 generic.go:334] "Generic (PLEG): container finished" podID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerID="de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e" exitCode=0 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.787607 4708 generic.go:334] "Generic (PLEG): container finished" podID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerID="c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb" exitCode=0 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.787616 4708 generic.go:334] "Generic (PLEG): container finished" podID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerID="a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16" exitCode=0 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.787626 4708 generic.go:334] "Generic (PLEG): container finished" podID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerID="3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779" exitCode=0 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.787643 4708 generic.go:334] "Generic (PLEG): container finished" podID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerID="5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a" exitCode=143 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.787650 4708 generic.go:334] 
"Generic (PLEG): container finished" podID="7efaba13-6a00-4f12-9e83-5a66a2246554" containerID="5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b" exitCode=143 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.787892 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788445 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvhg9\" (UniqueName: \"kubernetes.io/projected/ffe566ec-2ad3-4695-95ff-dff1609f9820-kube-api-access-hvhg9\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788493 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-systemd-units\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788530 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-log-socket\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788547 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ffe566ec-2ad3-4695-95ff-dff1609f9820-ovnkube-config\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788566 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerDied","Data":"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788576 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788603 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-run-systemd\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788609 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerDied","Data":"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788624 4708 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ffe566ec-2ad3-4695-95ff-dff1609f9820-ovnkube-script-lib\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788636 4708 scope.go:117] "RemoveContainer" containerID="a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788652 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-cni-netd\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788673 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ffe566ec-2ad3-4695-95ff-dff1609f9820-env-overrides\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788692 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-var-lib-openvswitch\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788625 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerDied","Data":"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788758 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerDied","Data":"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788772 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerDied","Data":"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788790 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerDied","Data":"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788801 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788811 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 
17:06:52.788817 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788823 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788828 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788834 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788839 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788847 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788853 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788878 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerDied","Data":"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788889 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788895 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788901 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788907 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788913 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788919 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 
17:06:52.788924 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788930 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788935 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788942 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788949 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerDied","Data":"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788960 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788966 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788972 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788978 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788983 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788989 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.788994 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789000 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789007 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 
17:06:52.789012 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789020 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-l82mg" event={"ID":"7efaba13-6a00-4f12-9e83-5a66a2246554","Type":"ContainerDied","Data":"a2ac0b2b7356518d9bce46ade5ea9cc63686575e3580ef47fe4b0f4b75113091"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789028 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789035 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789043 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789045 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-run-openvswitch\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789048 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789712 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789719 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789725 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789729 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789735 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789740 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789814 4708 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-run-netns\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789854 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-slash\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789945 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-run-ovn-kubernetes\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.789990 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-run-ovn\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790018 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-cni-bin\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790036 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-kubelet\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790064 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-node-log\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790089 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-etc-openvswitch\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790107 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ffe566ec-2ad3-4695-95ff-dff1609f9820-ovn-node-metrics-cert\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc 
kubenswrapper[4708]: I0227 17:06:52.790198 4708 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790216 4708 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790232 4708 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-log-socket\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790242 4708 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-run-netns\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790252 4708 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-env-overrides\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790261 4708 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-systemd\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790272 4708 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-ovnkube-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790282 4708 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-cni-bin\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790291 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dc6tg\" (UniqueName: \"kubernetes.io/projected/7efaba13-6a00-4f12-9e83-5a66a2246554-kube-api-access-dc6tg\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790301 4708 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-kubelet\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790311 4708 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7efaba13-6a00-4f12-9e83-5a66a2246554-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790320 4708 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-systemd-units\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790329 4708 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: 
I0227 17:06:52.790339 4708 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-run-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790349 4708 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-node-log\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790359 4708 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790368 4708 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-cni-netd\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790379 4708 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790389 4708 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7efaba13-6a00-4f12-9e83-5a66a2246554-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.790397 4708 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7efaba13-6a00-4f12-9e83-5a66a2246554-host-slash\") on node \"crc\" DevicePath \"\"" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.796873 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p6n6j_2c5353a5-c388-4046-bb29-8e73352588c2/kube-multus/2.log" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.797464 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p6n6j_2c5353a5-c388-4046-bb29-8e73352588c2/kube-multus/1.log" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.797491 4708 generic.go:334] "Generic (PLEG): container finished" podID="2c5353a5-c388-4046-bb29-8e73352588c2" containerID="55659c02564f28b8a0ba82f59d00103ed6e35b22ac47d4fc894c18e3333ba85f" exitCode=2 Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.797507 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p6n6j" event={"ID":"2c5353a5-c388-4046-bb29-8e73352588c2","Type":"ContainerDied","Data":"55659c02564f28b8a0ba82f59d00103ed6e35b22ac47d4fc894c18e3333ba85f"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.797527 4708 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ce0bbc3f6718c3f80abb77d64bf1761e9f580c0379391ab3ce6ef4aa8912c4a7"} Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.797959 4708 scope.go:117] "RemoveContainer" containerID="55659c02564f28b8a0ba82f59d00103ed6e35b22ac47d4fc894c18e3333ba85f" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.798183 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus 
pod=multus-p6n6j_openshift-multus(2c5353a5-c388-4046-bb29-8e73352588c2)\"" pod="openshift-multus/multus-p6n6j" podUID="2c5353a5-c388-4046-bb29-8e73352588c2" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.808997 4708 scope.go:117] "RemoveContainer" containerID="408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.843976 4708 scope.go:117] "RemoveContainer" containerID="8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.859981 4708 scope.go:117] "RemoveContainer" containerID="de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.861568 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l82mg"] Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.882021 4708 scope.go:117] "RemoveContainer" containerID="c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.882326 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-l82mg"] Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.891707 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-run-ovn\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.891749 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-cni-bin\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.891768 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-kubelet\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.891784 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-node-log\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.891802 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-etc-openvswitch\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.891821 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ffe566ec-2ad3-4695-95ff-dff1609f9820-ovn-node-metrics-cert\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 
17:06:52.891841 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvhg9\" (UniqueName: \"kubernetes.io/projected/ffe566ec-2ad3-4695-95ff-dff1609f9820-kube-api-access-hvhg9\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.891870 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-systemd-units\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.891895 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-log-socket\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.891909 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ffe566ec-2ad3-4695-95ff-dff1609f9820-ovnkube-config\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.891932 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.891949 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-run-systemd\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.891963 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ffe566ec-2ad3-4695-95ff-dff1609f9820-ovnkube-script-lib\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.891993 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-cni-netd\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.892009 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ffe566ec-2ad3-4695-95ff-dff1609f9820-env-overrides\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.892022 4708 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-var-lib-openvswitch\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.892059 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-run-openvswitch\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.892075 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-run-netns\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.892090 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-slash\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.892106 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-run-ovn-kubernetes\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.892170 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-run-ovn-kubernetes\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.892204 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-run-ovn\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.892225 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-cni-bin\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.892243 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-kubelet\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.892260 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-node-log\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.892278 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-etc-openvswitch\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.892993 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-systemd-units\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.893016 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-cni-netd\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.893037 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-log-socket\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.893394 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ffe566ec-2ad3-4695-95ff-dff1609f9820-env-overrides\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.893432 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-var-lib-openvswitch\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.893457 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-run-openvswitch\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.893564 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ffe566ec-2ad3-4695-95ff-dff1609f9820-ovnkube-config\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.893602 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-5hk97\" (UID: 
\"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.893626 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-run-systemd\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.893835 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-run-netns\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.893905 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ffe566ec-2ad3-4695-95ff-dff1609f9820-host-slash\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.894038 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ffe566ec-2ad3-4695-95ff-dff1609f9820-ovnkube-script-lib\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.895794 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ffe566ec-2ad3-4695-95ff-dff1609f9820-ovn-node-metrics-cert\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.900757 4708 scope.go:117] "RemoveContainer" containerID="a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.928020 4708 scope.go:117] "RemoveContainer" containerID="3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.931537 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvhg9\" (UniqueName: \"kubernetes.io/projected/ffe566ec-2ad3-4695-95ff-dff1609f9820-kube-api-access-hvhg9\") pod \"ovnkube-node-5hk97\" (UID: \"ffe566ec-2ad3-4695-95ff-dff1609f9820\") " pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.948306 4708 scope.go:117] "RemoveContainer" containerID="5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.965056 4708 scope.go:117] "RemoveContainer" containerID="5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.984389 4708 scope.go:117] "RemoveContainer" containerID="98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.996965 4708 scope.go:117] "RemoveContainer" containerID="a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.997310 4708 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914\": container with ID starting with a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914 not found: ID does not exist" containerID="a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.997354 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914"} err="failed to get container status \"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914\": rpc error: code = NotFound desc = could not find container \"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914\": container with ID starting with a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914 not found: ID does not exist" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.997373 4708 scope.go:117] "RemoveContainer" containerID="408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.997637 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\": container with ID starting with 408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed not found: ID does not exist" containerID="408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.997657 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed"} err="failed to get container status \"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\": rpc error: code = NotFound desc = could not find container \"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\": container with ID starting with 408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed not found: ID does not exist" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.997710 4708 scope.go:117] "RemoveContainer" containerID="8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.997955 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\": container with ID starting with 8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963 not found: ID does not exist" containerID="8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.998003 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963"} err="failed to get container status \"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\": rpc error: code = NotFound desc = could not find container \"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\": container with ID starting with 8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963 not found: ID does not exist" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.998016 4708 scope.go:117] "RemoveContainer" 
containerID="de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.998332 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\": container with ID starting with de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e not found: ID does not exist" containerID="de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.998374 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e"} err="failed to get container status \"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\": rpc error: code = NotFound desc = could not find container \"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\": container with ID starting with de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e not found: ID does not exist" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.998403 4708 scope.go:117] "RemoveContainer" containerID="c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.998686 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\": container with ID starting with c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb not found: ID does not exist" containerID="c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.998708 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb"} err="failed to get container status \"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\": rpc error: code = NotFound desc = could not find container \"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\": container with ID starting with c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb not found: ID does not exist" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.998724 4708 scope.go:117] "RemoveContainer" containerID="a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.999016 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\": container with ID starting with a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16 not found: ID does not exist" containerID="a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.999038 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16"} err="failed to get container status \"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\": rpc error: code = NotFound desc = could not find container \"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\": container with ID starting with 
a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16 not found: ID does not exist" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.999088 4708 scope.go:117] "RemoveContainer" containerID="3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.999300 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\": container with ID starting with 3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779 not found: ID does not exist" containerID="3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.999320 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779"} err="failed to get container status \"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\": rpc error: code = NotFound desc = could not find container \"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\": container with ID starting with 3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779 not found: ID does not exist" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.999346 4708 scope.go:117] "RemoveContainer" containerID="5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.999634 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\": container with ID starting with 5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a not found: ID does not exist" containerID="5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.999655 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a"} err="failed to get container status \"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\": rpc error: code = NotFound desc = could not find container \"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\": container with ID starting with 5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a not found: ID does not exist" Feb 27 17:06:52 crc kubenswrapper[4708]: I0227 17:06:52.999701 4708 scope.go:117] "RemoveContainer" containerID="5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b" Feb 27 17:06:52 crc kubenswrapper[4708]: E0227 17:06:52.999930 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\": container with ID starting with 5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b not found: ID does not exist" containerID="5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b" Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:52.999972 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b"} err="failed to get container status \"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\": rpc 
error: code = NotFound desc = could not find container \"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\": container with ID starting with 5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:52.999986 4708 scope.go:117] "RemoveContainer" containerID="98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b"
Feb 27 17:06:53 crc kubenswrapper[4708]: E0227 17:06:53.000261 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\": container with ID starting with 98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b not found: ID does not exist" containerID="98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.000295 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b"} err="failed to get container status \"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\": rpc error: code = NotFound desc = could not find container \"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\": container with ID starting with 98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.000315 4708 scope.go:117] "RemoveContainer" containerID="a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.000590 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914"} err="failed to get container status \"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914\": rpc error: code = NotFound desc = could not find container \"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914\": container with ID starting with a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914 not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.000612 4708 scope.go:117] "RemoveContainer" containerID="408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.001360 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed"} err="failed to get container status \"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\": rpc error: code = NotFound desc = could not find container \"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\": container with ID starting with 408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.001381 4708 scope.go:117] "RemoveContainer" containerID="8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.001631 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963"} err="failed to get container status \"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\": rpc error: code = NotFound desc = could not find container \"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\": container with ID starting with 8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963 not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.001668 4708 scope.go:117] "RemoveContainer" containerID="de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.001919 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e"} err="failed to get container status \"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\": rpc error: code = NotFound desc = could not find container \"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\": container with ID starting with de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.001941 4708 scope.go:117] "RemoveContainer" containerID="c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.002175 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb"} err="failed to get container status \"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\": rpc error: code = NotFound desc = could not find container \"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\": container with ID starting with c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.002195 4708 scope.go:117] "RemoveContainer" containerID="a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.006238 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16"} err="failed to get container status \"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\": rpc error: code = NotFound desc = could not find container \"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\": container with ID starting with a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16 not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.006295 4708 scope.go:117] "RemoveContainer" containerID="3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.007095 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779"} err="failed to get container status \"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\": rpc error: code = NotFound desc = could not find container \"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\": container with ID starting with 3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779 not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.007134 4708 scope.go:117] "RemoveContainer" containerID="5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.007358 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a"} err="failed to get container status \"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\": rpc error: code = NotFound desc = could not find container \"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\": container with ID starting with 5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.007380 4708 scope.go:117] "RemoveContainer" containerID="5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.007636 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b"} err="failed to get container status \"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\": rpc error: code = NotFound desc = could not find container \"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\": container with ID starting with 5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.007686 4708 scope.go:117] "RemoveContainer" containerID="98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.007955 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b"} err="failed to get container status \"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\": rpc error: code = NotFound desc = could not find container \"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\": container with ID starting with 98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.008004 4708 scope.go:117] "RemoveContainer" containerID="a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.008246 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914"} err="failed to get container status \"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914\": rpc error: code = NotFound desc = could not find container \"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914\": container with ID starting with a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914 not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.008292 4708 scope.go:117] "RemoveContainer" containerID="408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.008524 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed"} err="failed to get container status \"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\": rpc error: code = NotFound desc = could not find container \"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\": container with ID starting with 408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.008591 4708 scope.go:117] "RemoveContainer" containerID="8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.012117 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963"} err="failed to get container status \"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\": rpc error: code = NotFound desc = could not find container \"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\": container with ID starting with 8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963 not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.012143 4708 scope.go:117] "RemoveContainer" containerID="de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.012409 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e"} err="failed to get container status \"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\": rpc error: code = NotFound desc = could not find container \"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\": container with ID starting with de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.012438 4708 scope.go:117] "RemoveContainer" containerID="c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.012671 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb"} err="failed to get container status \"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\": rpc error: code = NotFound desc = could not find container \"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\": container with ID starting with c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.012733 4708 scope.go:117] "RemoveContainer" containerID="a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.012976 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16"} err="failed to get container status \"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\": rpc error: code = NotFound desc = could not find container \"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\": container with ID starting with a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16 not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.013017 4708 scope.go:117] "RemoveContainer" containerID="3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.013257 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779"} err="failed to get container status \"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\": rpc error: code = NotFound desc = could not find container \"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\": container with ID starting with 3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779 not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.013283 4708 scope.go:117] "RemoveContainer" containerID="5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.013586 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a"} err="failed to get container status \"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\": rpc error: code = NotFound desc = could not find container \"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\": container with ID starting with 5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.013638 4708 scope.go:117] "RemoveContainer" containerID="5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.013836 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b"} err="failed to get container status \"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\": rpc error: code = NotFound desc = could not find container \"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\": container with ID starting with 5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.013877 4708 scope.go:117] "RemoveContainer" containerID="98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.014228 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b"} err="failed to get container status \"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\": rpc error: code = NotFound desc = could not find container \"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\": container with ID starting with 98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.014253 4708 scope.go:117] "RemoveContainer" containerID="a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.014502 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914"} err="failed to get container status \"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914\": rpc error: code = NotFound desc = could not find container \"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914\": container with ID starting with a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914 not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.014520 4708 scope.go:117] "RemoveContainer" containerID="408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.014760 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed"} err="failed to get container status \"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\": rpc error: code = NotFound desc = could not find container \"408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed\": container with ID starting with 408fb5efa8d07cb2dc546be45d74541437915388683390674ea1bfce28ae2aed not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.014804 4708 scope.go:117] "RemoveContainer" containerID="8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.015071 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963"} err="failed to get container status \"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\": rpc error: code = NotFound desc = could not find container \"8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963\": container with ID starting with 8cb96be322c7e476e42859c862864c94299265e4df07912ac50f38b7b986c963 not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.015097 4708 scope.go:117] "RemoveContainer" containerID="de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.015341 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e"} err="failed to get container status \"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\": rpc error: code = NotFound desc = could not find container \"de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e\": container with ID starting with de5862932075a07d2af52cf67f7043158651f40d17878d85c8b856dab1f1504e not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.015360 4708 scope.go:117] "RemoveContainer" containerID="c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.015589 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb"} err="failed to get container status \"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\": rpc error: code = NotFound desc = could not find container \"c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb\": container with ID starting with c7fbf74eb844e11ce5503563caccfb6d4702f25c7c0b343ba1cef4b45d6282fb not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.015605 4708 scope.go:117] "RemoveContainer" containerID="a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.015825 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16"} err="failed to get container status \"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\": rpc error: code = NotFound desc = could not find container \"a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16\": container with ID starting with a490019246a8d7e26eef3583d8fe76a2eaf2147a9ec7bd0e810f70fc6f2dca16 not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.015881 4708 scope.go:117] "RemoveContainer" containerID="3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.016122 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779"} err="failed to get container status \"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\": rpc error: code = NotFound desc = could not find container \"3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779\": container with ID starting with 3b40a1e1937a8cc7a86c0c8bacfbea5f5f95a8242378aa8ea8a3f3f23fe0e779 not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.016141 4708 scope.go:117] "RemoveContainer" containerID="5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.020061 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a"} err="failed to get container status \"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\": rpc error: code = NotFound desc = could not find container \"5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a\": container with ID starting with 5550a4a847096a4a6b62537a29edf35bd42851576707d373faa1d659b335f25a not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.020122 4708 scope.go:117] "RemoveContainer" containerID="5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.020385 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b"} err="failed to get container status \"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\": rpc error: code = NotFound desc = could not find container \"5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b\": container with ID starting with 5c270ce6ff53e288e0071eaec6228aa10fea76b57438bc22272ab8799d256f1b not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.020404 4708 scope.go:117] "RemoveContainer" containerID="98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.020666 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b"} err="failed to get container status \"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\": rpc error: code = NotFound desc = could not find container \"98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b\": container with ID starting with 98810a4da5b5b94cd4ef4b7c895749178e0c66ec1de9e4ade65ab1510f37dd7b not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.020694 4708 scope.go:117] "RemoveContainer" containerID="a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.020951 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914"} err="failed to get container status \"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914\": rpc error: code = NotFound desc = could not find container \"a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914\": container with ID starting with a01ee7998df8863fa9899a3f208a18007475d82fbc894d41a8805eb5dc0b6914 not found: ID does not exist"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.032594 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97"
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.803776 4708 generic.go:334] "Generic (PLEG): container finished" podID="ffe566ec-2ad3-4695-95ff-dff1609f9820" containerID="85f3ea5d4c5028668e5e423c009dadbf2a02b013a39c253636b46f82f2f58400" exitCode=0
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.803810 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" event={"ID":"ffe566ec-2ad3-4695-95ff-dff1609f9820","Type":"ContainerDied","Data":"85f3ea5d4c5028668e5e423c009dadbf2a02b013a39c253636b46f82f2f58400"}
Feb 27 17:06:53 crc kubenswrapper[4708]: I0227 17:06:53.803831 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" event={"ID":"ffe566ec-2ad3-4695-95ff-dff1609f9820","Type":"ContainerStarted","Data":"806a95b6a876367de23c1ff8f6fb8114611dc0e56a6891bd614a2e019d8835a4"}
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.233941 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7efaba13-6a00-4f12-9e83-5a66a2246554" path="/var/lib/kubelet/pods/7efaba13-6a00-4f12-9e83-5a66a2246554/volumes"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.606888 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm"]
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.607687 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.609914 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.610230 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-94887"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.610405 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.712019 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5gzf\" (UniqueName: \"kubernetes.io/projected/33badcb1-2622-423f-afe6-482b92342910-kube-api-access-h5gzf\") pod \"obo-prometheus-operator-68bc856cb9-mnthm\" (UID: \"33badcb1-2622-423f-afe6-482b92342910\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.724307 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"]
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.724907 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.727445 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-4tftk"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.727801 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.735498 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"]
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.736027 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.811535 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" event={"ID":"ffe566ec-2ad3-4695-95ff-dff1609f9820","Type":"ContainerStarted","Data":"41aed547d2ec6909aff69052bbb71670badc7589b21d49e53a6de422ae8b4c95"}
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.811581 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" event={"ID":"ffe566ec-2ad3-4695-95ff-dff1609f9820","Type":"ContainerStarted","Data":"94213c53c78cb0004ba88e5e7e4cabe137977e097847228b39f5d965880bed31"}
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.811595 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" event={"ID":"ffe566ec-2ad3-4695-95ff-dff1609f9820","Type":"ContainerStarted","Data":"0a2e4166317817f7120d713614055af18ac741c83705690e1656026ee201851e"}
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.811605 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" event={"ID":"ffe566ec-2ad3-4695-95ff-dff1609f9820","Type":"ContainerStarted","Data":"a970aa7d59b75414f802021c470c3fb8c0f5c8b2868181861b9cb3f8a6caec9c"}
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.811614 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" event={"ID":"ffe566ec-2ad3-4695-95ff-dff1609f9820","Type":"ContainerStarted","Data":"44fa5e532cf1ccf22c656191c4f19d01a784639775533127b2fa42f76fb63621"}
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.811623 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" event={"ID":"ffe566ec-2ad3-4695-95ff-dff1609f9820","Type":"ContainerStarted","Data":"a35120c2972eba37dfcd7c2463285082dd6e0cfa1401cd19657f97455c2ffa66"}
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.813210 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e46fb234-1f2b-4217-b76b-0e2900d525da-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx\" (UID: \"e46fb234-1f2b-4217-b76b-0e2900d525da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.813273 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e46fb234-1f2b-4217-b76b-0e2900d525da-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx\" (UID: \"e46fb234-1f2b-4217-b76b-0e2900d525da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.813333 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5gzf\" (UniqueName: \"kubernetes.io/projected/33badcb1-2622-423f-afe6-482b92342910-kube-api-access-h5gzf\") pod \"obo-prometheus-operator-68bc856cb9-mnthm\" (UID: \"33badcb1-2622-423f-afe6-482b92342910\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.813366 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3b0277f7-658b-4897-b034-9aab6cacc59e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs\" (UID: \"3b0277f7-658b-4897-b034-9aab6cacc59e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.813388 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3b0277f7-658b-4897-b034-9aab6cacc59e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs\" (UID: \"3b0277f7-658b-4897-b034-9aab6cacc59e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.830354 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5gzf\" (UniqueName: \"kubernetes.io/projected/33badcb1-2622-423f-afe6-482b92342910-kube-api-access-h5gzf\") pod \"obo-prometheus-operator-68bc856cb9-mnthm\" (UID: \"33badcb1-2622-423f-afe6-482b92342910\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.841574 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-x7wsw"]
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.842271 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.844517 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-zftqs"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.845161 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.914129 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e46fb234-1f2b-4217-b76b-0e2900d525da-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx\" (UID: \"e46fb234-1f2b-4217-b76b-0e2900d525da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.914190 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f03daf2a-7ba1-454e-a2fd-dd2e12631679-observability-operator-tls\") pod \"observability-operator-59bdc8b94-x7wsw\" (UID: \"f03daf2a-7ba1-454e-a2fd-dd2e12631679\") " pod="openshift-operators/observability-operator-59bdc8b94-x7wsw"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.914235 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3b0277f7-658b-4897-b034-9aab6cacc59e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs\" (UID: \"3b0277f7-658b-4897-b034-9aab6cacc59e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.914252 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3b0277f7-658b-4897-b034-9aab6cacc59e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs\" (UID: \"3b0277f7-658b-4897-b034-9aab6cacc59e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.914279 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jssc4\" (UniqueName: \"kubernetes.io/projected/f03daf2a-7ba1-454e-a2fd-dd2e12631679-kube-api-access-jssc4\") pod \"observability-operator-59bdc8b94-x7wsw\" (UID: \"f03daf2a-7ba1-454e-a2fd-dd2e12631679\") " pod="openshift-operators/observability-operator-59bdc8b94-x7wsw"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.914316 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e46fb234-1f2b-4217-b76b-0e2900d525da-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx\" (UID: \"e46fb234-1f2b-4217-b76b-0e2900d525da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.917501 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3b0277f7-658b-4897-b034-9aab6cacc59e-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs\" (UID: \"3b0277f7-658b-4897-b034-9aab6cacc59e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.918802 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3b0277f7-658b-4897-b034-9aab6cacc59e-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs\" (UID: \"3b0277f7-658b-4897-b034-9aab6cacc59e\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.919080 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e46fb234-1f2b-4217-b76b-0e2900d525da-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx\" (UID: \"e46fb234-1f2b-4217-b76b-0e2900d525da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.919138 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e46fb234-1f2b-4217-b76b-0e2900d525da-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx\" (UID: \"e46fb234-1f2b-4217-b76b-0e2900d525da\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.920365 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.922874 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-2qs44"]
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.923485 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2qs44"
Feb 27 17:06:54 crc kubenswrapper[4708]: I0227 17:06:54.932323 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-d2ftd"
Feb 27 17:06:54 crc kubenswrapper[4708]: E0227 17:06:54.970393 4708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mnthm_openshift-operators_33badcb1-2622-423f-afe6-482b92342910_0(3a88e98cc22dc26da91eb0356dd0a17e3cbbffa66259a333b9b1479ca37b123c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 17:06:54 crc kubenswrapper[4708]: E0227 17:06:54.970668 4708 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mnthm_openshift-operators_33badcb1-2622-423f-afe6-482b92342910_0(3a88e98cc22dc26da91eb0356dd0a17e3cbbffa66259a333b9b1479ca37b123c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm"
Feb 27 17:06:54 crc kubenswrapper[4708]: E0227 17:06:54.970691 4708 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mnthm_openshift-operators_33badcb1-2622-423f-afe6-482b92342910_0(3a88e98cc22dc26da91eb0356dd0a17e3cbbffa66259a333b9b1479ca37b123c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm"
Feb 27 17:06:54 crc kubenswrapper[4708]: E0227 17:06:54.970732 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-mnthm_openshift-operators(33badcb1-2622-423f-afe6-482b92342910)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-mnthm_openshift-operators(33badcb1-2622-423f-afe6-482b92342910)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mnthm_openshift-operators_33badcb1-2622-423f-afe6-482b92342910_0(3a88e98cc22dc26da91eb0356dd0a17e3cbbffa66259a333b9b1479ca37b123c): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm" podUID="33badcb1-2622-423f-afe6-482b92342910"
Feb 27 17:06:55 crc kubenswrapper[4708]: I0227 17:06:55.015346 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvfgw\" (UniqueName: \"kubernetes.io/projected/5c087cb8-7024-4186-9e20-5620cdb2fd9a-kube-api-access-hvfgw\") pod \"perses-operator-5bf474d74f-2qs44\" (UID: \"5c087cb8-7024-4186-9e20-5620cdb2fd9a\") " pod="openshift-operators/perses-operator-5bf474d74f-2qs44"
Feb 27 17:06:55 crc kubenswrapper[4708]: I0227 17:06:55.015403 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f03daf2a-7ba1-454e-a2fd-dd2e12631679-observability-operator-tls\") pod \"observability-operator-59bdc8b94-x7wsw\" (UID: \"f03daf2a-7ba1-454e-a2fd-dd2e12631679\") " pod="openshift-operators/observability-operator-59bdc8b94-x7wsw"
Feb 27 17:06:55 crc kubenswrapper[4708]: I0227 17:06:55.015443 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jssc4\" (UniqueName: \"kubernetes.io/projected/f03daf2a-7ba1-454e-a2fd-dd2e12631679-kube-api-access-jssc4\") pod \"observability-operator-59bdc8b94-x7wsw\" (UID: \"f03daf2a-7ba1-454e-a2fd-dd2e12631679\") " pod="openshift-operators/observability-operator-59bdc8b94-x7wsw"
Feb 27 17:06:55 crc kubenswrapper[4708]: I0227 17:06:55.015458 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c087cb8-7024-4186-9e20-5620cdb2fd9a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-2qs44\" (UID: \"5c087cb8-7024-4186-9e20-5620cdb2fd9a\") " pod="openshift-operators/perses-operator-5bf474d74f-2qs44"
Feb 27 17:06:55 crc kubenswrapper[4708]: I0227 17:06:55.018546 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/f03daf2a-7ba1-454e-a2fd-dd2e12631679-observability-operator-tls\") pod \"observability-operator-59bdc8b94-x7wsw\" (UID: \"f03daf2a-7ba1-454e-a2fd-dd2e12631679\") " pod="openshift-operators/observability-operator-59bdc8b94-x7wsw"
Feb 27 17:06:55 crc kubenswrapper[4708]: I0227 17:06:55.033193 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jssc4\" (UniqueName: \"kubernetes.io/projected/f03daf2a-7ba1-454e-a2fd-dd2e12631679-kube-api-access-jssc4\") pod \"observability-operator-59bdc8b94-x7wsw\" (UID: \"f03daf2a-7ba1-454e-a2fd-dd2e12631679\") " pod="openshift-operators/observability-operator-59bdc8b94-x7wsw"
Feb 27 17:06:55 crc kubenswrapper[4708]: I0227 17:06:55.038340 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"
Feb 27 17:06:55 crc kubenswrapper[4708]: I0227 17:06:55.051370 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.055024 4708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx_openshift-operators_e46fb234-1f2b-4217-b76b-0e2900d525da_0(3415d80f7f357e01771dbee046ea9a64f46c9e541350f00e06e690d1434c27f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.055081 4708 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx_openshift-operators_e46fb234-1f2b-4217-b76b-0e2900d525da_0(3415d80f7f357e01771dbee046ea9a64f46c9e541350f00e06e690d1434c27f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.055101 4708 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx_openshift-operators_e46fb234-1f2b-4217-b76b-0e2900d525da_0(3415d80f7f357e01771dbee046ea9a64f46c9e541350f00e06e690d1434c27f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.055148 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx_openshift-operators(e46fb234-1f2b-4217-b76b-0e2900d525da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx_openshift-operators(e46fb234-1f2b-4217-b76b-0e2900d525da)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx_openshift-operators_e46fb234-1f2b-4217-b76b-0e2900d525da_0(3415d80f7f357e01771dbee046ea9a64f46c9e541350f00e06e690d1434c27f5): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx" podUID="e46fb234-1f2b-4217-b76b-0e2900d525da"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.074447 4708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs_openshift-operators_3b0277f7-658b-4897-b034-9aab6cacc59e_0(8a77a454a774b5e240f836ab7f23ce51e16869dcc83b615a30f7930331e0880d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.074507 4708 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs_openshift-operators_3b0277f7-658b-4897-b034-9aab6cacc59e_0(8a77a454a774b5e240f836ab7f23ce51e16869dcc83b615a30f7930331e0880d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.074532 4708 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs_openshift-operators_3b0277f7-658b-4897-b034-9aab6cacc59e_0(8a77a454a774b5e240f836ab7f23ce51e16869dcc83b615a30f7930331e0880d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.074585 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs_openshift-operators(3b0277f7-658b-4897-b034-9aab6cacc59e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs_openshift-operators(3b0277f7-658b-4897-b034-9aab6cacc59e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs_openshift-operators_3b0277f7-658b-4897-b034-9aab6cacc59e_0(8a77a454a774b5e240f836ab7f23ce51e16869dcc83b615a30f7930331e0880d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs" podUID="3b0277f7-658b-4897-b034-9aab6cacc59e"
Feb 27 17:06:55 crc kubenswrapper[4708]: I0227 17:06:55.117029 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvfgw\" (UniqueName: \"kubernetes.io/projected/5c087cb8-7024-4186-9e20-5620cdb2fd9a-kube-api-access-hvfgw\") pod \"perses-operator-5bf474d74f-2qs44\" (UID: \"5c087cb8-7024-4186-9e20-5620cdb2fd9a\") " pod="openshift-operators/perses-operator-5bf474d74f-2qs44"
Feb 27 17:06:55 crc kubenswrapper[4708]: I0227 17:06:55.117106 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c087cb8-7024-4186-9e20-5620cdb2fd9a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-2qs44\" (UID: \"5c087cb8-7024-4186-9e20-5620cdb2fd9a\") " pod="openshift-operators/perses-operator-5bf474d74f-2qs44"
Feb 27 17:06:55 crc kubenswrapper[4708]: I0227 17:06:55.117795 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5c087cb8-7024-4186-9e20-5620cdb2fd9a-openshift-service-ca\") pod \"perses-operator-5bf474d74f-2qs44\" (UID: \"5c087cb8-7024-4186-9e20-5620cdb2fd9a\") " pod="openshift-operators/perses-operator-5bf474d74f-2qs44"
Feb 27 17:06:55 crc kubenswrapper[4708]: I0227 17:06:55.137598 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvfgw\" (UniqueName: \"kubernetes.io/projected/5c087cb8-7024-4186-9e20-5620cdb2fd9a-kube-api-access-hvfgw\") pod \"perses-operator-5bf474d74f-2qs44\" (UID: \"5c087cb8-7024-4186-9e20-5620cdb2fd9a\") " pod="openshift-operators/perses-operator-5bf474d74f-2qs44"
Feb 27 17:06:55 crc kubenswrapper[4708]: I0227 17:06:55.161495 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.188901 4708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-x7wsw_openshift-operators_f03daf2a-7ba1-454e-a2fd-dd2e12631679_0(8ce8f14b53868223c9e50f1fb2b7290ac6bd7cd7cef6f43b5e8a343cb6c909e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.188970 4708 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-x7wsw_openshift-operators_f03daf2a-7ba1-454e-a2fd-dd2e12631679_0(8ce8f14b53868223c9e50f1fb2b7290ac6bd7cd7cef6f43b5e8a343cb6c909e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.188994 4708 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-x7wsw_openshift-operators_f03daf2a-7ba1-454e-a2fd-dd2e12631679_0(8ce8f14b53868223c9e50f1fb2b7290ac6bd7cd7cef6f43b5e8a343cb6c909e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.189047 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-x7wsw_openshift-operators(f03daf2a-7ba1-454e-a2fd-dd2e12631679)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-x7wsw_openshift-operators(f03daf2a-7ba1-454e-a2fd-dd2e12631679)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-x7wsw_openshift-operators_f03daf2a-7ba1-454e-a2fd-dd2e12631679_0(8ce8f14b53868223c9e50f1fb2b7290ac6bd7cd7cef6f43b5e8a343cb6c909e7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw" podUID="f03daf2a-7ba1-454e-a2fd-dd2e12631679"
Feb 27 17:06:55 crc kubenswrapper[4708]: I0227 17:06:55.269888 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2qs44"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.295087 4708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2qs44_openshift-operators_5c087cb8-7024-4186-9e20-5620cdb2fd9a_0(5959afde14e46b63e7c612014e092c20dbe8ec5221e39704e6b3fc3a9bccce67): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.295170 4708 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2qs44_openshift-operators_5c087cb8-7024-4186-9e20-5620cdb2fd9a_0(5959afde14e46b63e7c612014e092c20dbe8ec5221e39704e6b3fc3a9bccce67): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-2qs44"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.295212 4708 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2qs44_openshift-operators_5c087cb8-7024-4186-9e20-5620cdb2fd9a_0(5959afde14e46b63e7c612014e092c20dbe8ec5221e39704e6b3fc3a9bccce67): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-2qs44"
Feb 27 17:06:55 crc kubenswrapper[4708]: E0227 17:06:55.295281 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-2qs44_openshift-operators(5c087cb8-7024-4186-9e20-5620cdb2fd9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-2qs44_openshift-operators(5c087cb8-7024-4186-9e20-5620cdb2fd9a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2qs44_openshift-operators_5c087cb8-7024-4186-9e20-5620cdb2fd9a_0(5959afde14e46b63e7c612014e092c20dbe8ec5221e39704e6b3fc3a9bccce67): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-2qs44" podUID="5c087cb8-7024-4186-9e20-5620cdb2fd9a"
Feb 27 17:06:56 crc kubenswrapper[4708]: I0227 17:06:56.825356 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" event={"ID":"ffe566ec-2ad3-4695-95ff-dff1609f9820","Type":"ContainerStarted","Data":"db4be04bf0ffeef62477cf27db133c821ab8d51994b9383b421a5fd8571afd01"}
Feb 27 17:06:59 crc kubenswrapper[4708]: I0227 17:06:59.845159 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" event={"ID":"ffe566ec-2ad3-4695-95ff-dff1609f9820","Type":"ContainerStarted","Data":"9d2f1ab4f0b0b4bd170ab89f927410cc4f20036ac171584e48e861f076dcf0f9"}
Feb 27 17:06:59 crc kubenswrapper[4708]: I0227 17:06:59.846340 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97"
Feb 27 17:06:59 crc kubenswrapper[4708]: I0227 17:06:59.846431 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97"
Feb 27 17:06:59 crc kubenswrapper[4708]: I0227 17:06:59.846487 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97"
Feb 27 17:06:59 crc kubenswrapper[4708]: I0227 17:06:59.872446 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" podStartSLOduration=7.872429952 podStartE2EDuration="7.872429952s" podCreationTimestamp="2026-02-27 17:06:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:06:59.869441836 +0000 UTC m=+818.385239423" watchObservedRunningTime="2026-02-27 17:06:59.872429952 +0000 UTC m=+818.388227539"
Feb 27 17:06:59 crc kubenswrapper[4708]: I0227 17:06:59.872666 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97"
Feb 27 17:06:59 crc kubenswrapper[4708]: I0227 17:06:59.876918 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97"
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.030344 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm"]
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.030463 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm"
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.030922 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm"
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.034939 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"]
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.035070 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.035952 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.057433 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-2qs44"]
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.057816 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2qs44"
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.058486 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2qs44"
Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.061527 4708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mnthm_openshift-operators_33badcb1-2622-423f-afe6-482b92342910_0(53ddad25599123645e32d7b8134925b960bd3c48d3c6181b71015770c657e62d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.061599 4708 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mnthm_openshift-operators_33badcb1-2622-423f-afe6-482b92342910_0(53ddad25599123645e32d7b8134925b960bd3c48d3c6181b71015770c657e62d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm"
Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.061628 4708 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mnthm_openshift-operators_33badcb1-2622-423f-afe6-482b92342910_0(53ddad25599123645e32d7b8134925b960bd3c48d3c6181b71015770c657e62d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm"
Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.061679 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-68bc856cb9-mnthm_openshift-operators(33badcb1-2622-423f-afe6-482b92342910)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-68bc856cb9-mnthm_openshift-operators(33badcb1-2622-423f-afe6-482b92342910)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-68bc856cb9-mnthm_openshift-operators_33badcb1-2622-423f-afe6-482b92342910_0(53ddad25599123645e32d7b8134925b960bd3c48d3c6181b71015770c657e62d): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm" podUID="33badcb1-2622-423f-afe6-482b92342910"
Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.087131 4708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs_openshift-operators_3b0277f7-658b-4897-b034-9aab6cacc59e_0(3076dbdbc4aaa82b6d34652ea7c8b066ac275f7b91cd61082d43b7b70b667abe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.087192 4708 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs_openshift-operators_3b0277f7-658b-4897-b034-9aab6cacc59e_0(3076dbdbc4aaa82b6d34652ea7c8b066ac275f7b91cd61082d43b7b70b667abe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"
Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.087210 4708 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs_openshift-operators_3b0277f7-658b-4897-b034-9aab6cacc59e_0(3076dbdbc4aaa82b6d34652ea7c8b066ac275f7b91cd61082d43b7b70b667abe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"
Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.087260 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs_openshift-operators(3b0277f7-658b-4897-b034-9aab6cacc59e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs_openshift-operators(3b0277f7-658b-4897-b034-9aab6cacc59e)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs_openshift-operators_3b0277f7-658b-4897-b034-9aab6cacc59e_0(3076dbdbc4aaa82b6d34652ea7c8b066ac275f7b91cd61082d43b7b70b667abe): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs" podUID="3b0277f7-658b-4897-b034-9aab6cacc59e"
Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.093072 4708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2qs44_openshift-operators_5c087cb8-7024-4186-9e20-5620cdb2fd9a_0(b5e83fe8c09fd39207620ae7caa16e694edf47838d1fe687f76e03574df39d0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.093139 4708 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2qs44_openshift-operators_5c087cb8-7024-4186-9e20-5620cdb2fd9a_0(b5e83fe8c09fd39207620ae7caa16e694edf47838d1fe687f76e03574df39d0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-2qs44"
Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.093160 4708 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2qs44_openshift-operators_5c087cb8-7024-4186-9e20-5620cdb2fd9a_0(b5e83fe8c09fd39207620ae7caa16e694edf47838d1fe687f76e03574df39d0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/perses-operator-5bf474d74f-2qs44"
Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.093207 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"perses-operator-5bf474d74f-2qs44_openshift-operators(5c087cb8-7024-4186-9e20-5620cdb2fd9a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"perses-operator-5bf474d74f-2qs44_openshift-operators(5c087cb8-7024-4186-9e20-5620cdb2fd9a)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_perses-operator-5bf474d74f-2qs44_openshift-operators_5c087cb8-7024-4186-9e20-5620cdb2fd9a_0(b5e83fe8c09fd39207620ae7caa16e694edf47838d1fe687f76e03574df39d0b): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/perses-operator-5bf474d74f-2qs44" podUID="5c087cb8-7024-4186-9e20-5620cdb2fd9a"
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.108912 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"]
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.109065 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.109476 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.115909 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-x7wsw"]
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.116964 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw"
Feb 27 17:07:00 crc kubenswrapper[4708]: I0227 17:07:00.117407 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw"
Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.155731 4708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx_openshift-operators_e46fb234-1f2b-4217-b76b-0e2900d525da_0(6abd871c474b6b53ae326b04d69b58e5679b318489e84bf3c80dc75c9555e7d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.155805 4708 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx_openshift-operators_e46fb234-1f2b-4217-b76b-0e2900d525da_0(6abd871c474b6b53ae326b04d69b58e5679b318489e84bf3c80dc75c9555e7d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx" Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.155833 4708 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx_openshift-operators_e46fb234-1f2b-4217-b76b-0e2900d525da_0(6abd871c474b6b53ae326b04d69b58e5679b318489e84bf3c80dc75c9555e7d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx" Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.155930 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx_openshift-operators(e46fb234-1f2b-4217-b76b-0e2900d525da)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx_openshift-operators(e46fb234-1f2b-4217-b76b-0e2900d525da)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx_openshift-operators_e46fb234-1f2b-4217-b76b-0e2900d525da_0(6abd871c474b6b53ae326b04d69b58e5679b318489e84bf3c80dc75c9555e7d4): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx" podUID="e46fb234-1f2b-4217-b76b-0e2900d525da" Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.172830 4708 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-x7wsw_openshift-operators_f03daf2a-7ba1-454e-a2fd-dd2e12631679_0(b9f6810dd25feb03788a8d4afc71afa7a1425c096da111ef8c3b1b7012dbc419): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.172912 4708 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-x7wsw_openshift-operators_f03daf2a-7ba1-454e-a2fd-dd2e12631679_0(b9f6810dd25feb03788a8d4afc71afa7a1425c096da111ef8c3b1b7012dbc419): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw" Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.172933 4708 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-x7wsw_openshift-operators_f03daf2a-7ba1-454e-a2fd-dd2e12631679_0(b9f6810dd25feb03788a8d4afc71afa7a1425c096da111ef8c3b1b7012dbc419): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-operators/observability-operator-59bdc8b94-x7wsw" Feb 27 17:07:00 crc kubenswrapper[4708]: E0227 17:07:00.172979 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"observability-operator-59bdc8b94-x7wsw_openshift-operators(f03daf2a-7ba1-454e-a2fd-dd2e12631679)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"observability-operator-59bdc8b94-x7wsw_openshift-operators(f03daf2a-7ba1-454e-a2fd-dd2e12631679)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_observability-operator-59bdc8b94-x7wsw_openshift-operators_f03daf2a-7ba1-454e-a2fd-dd2e12631679_0(b9f6810dd25feb03788a8d4afc71afa7a1425c096da111ef8c3b1b7012dbc419): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw" podUID="f03daf2a-7ba1-454e-a2fd-dd2e12631679" Feb 27 17:07:05 crc kubenswrapper[4708]: I0227 17:07:05.631317 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:07:05 crc kubenswrapper[4708]: I0227 17:07:05.631588 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:07:05 crc kubenswrapper[4708]: I0227 17:07:05.631646 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 17:07:05 crc kubenswrapper[4708]: I0227 17:07:05.632202 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"73433f85f32a02d199ead494dd30f304e263e12a457d51cac8315ed1c3121a5b"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:07:05 crc kubenswrapper[4708]: I0227 17:07:05.632253 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://73433f85f32a02d199ead494dd30f304e263e12a457d51cac8315ed1c3121a5b" gracePeriod=600 Feb 27 17:07:05 crc kubenswrapper[4708]: I0227 17:07:05.882684 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="73433f85f32a02d199ead494dd30f304e263e12a457d51cac8315ed1c3121a5b" exitCode=0 Feb 27 17:07:05 crc kubenswrapper[4708]: I0227 17:07:05.882741 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"73433f85f32a02d199ead494dd30f304e263e12a457d51cac8315ed1c3121a5b"} Feb 27 17:07:05 crc kubenswrapper[4708]: I0227 17:07:05.882795 4708 scope.go:117] "RemoveContainer" containerID="e83f2b773e936bb9a13f65de3b8f952c3975449adc00a4f209961e8bb7a647c2" Feb 27 17:07:06 crc kubenswrapper[4708]: I0227 17:07:06.891079 
4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"1b93b6ea88dbf15ec38dc361eee21fbc69cdb9df7c63344796e2852a98085a90"} Feb 27 17:07:07 crc kubenswrapper[4708]: I0227 17:07:07.230555 4708 scope.go:117] "RemoveContainer" containerID="55659c02564f28b8a0ba82f59d00103ed6e35b22ac47d4fc894c18e3333ba85f" Feb 27 17:07:07 crc kubenswrapper[4708]: I0227 17:07:07.898654 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p6n6j_2c5353a5-c388-4046-bb29-8e73352588c2/kube-multus/2.log" Feb 27 17:07:07 crc kubenswrapper[4708]: I0227 17:07:07.899431 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p6n6j_2c5353a5-c388-4046-bb29-8e73352588c2/kube-multus/1.log" Feb 27 17:07:07 crc kubenswrapper[4708]: I0227 17:07:07.899565 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-p6n6j" event={"ID":"2c5353a5-c388-4046-bb29-8e73352588c2","Type":"ContainerStarted","Data":"fa0bba8f998a052d03998584e9b305c10e4be3d616f861986818e50bf06ac51b"} Feb 27 17:07:11 crc kubenswrapper[4708]: I0227 17:07:11.227932 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm" Feb 27 17:07:11 crc kubenswrapper[4708]: I0227 17:07:11.228592 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm" Feb 27 17:07:11 crc kubenswrapper[4708]: I0227 17:07:11.739563 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm"] Feb 27 17:07:11 crc kubenswrapper[4708]: W0227 17:07:11.751485 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33badcb1_2622_423f_afe6_482b92342910.slice/crio-fa9add4b9a1cba22ac3fe6769b686290fddbb5201f96d773377d753b513464fa WatchSource:0}: Error finding container fa9add4b9a1cba22ac3fe6769b686290fddbb5201f96d773377d753b513464fa: Status 404 returned error can't find the container with id fa9add4b9a1cba22ac3fe6769b686290fddbb5201f96d773377d753b513464fa Feb 27 17:07:11 crc kubenswrapper[4708]: I0227 17:07:11.928587 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm" event={"ID":"33badcb1-2622-423f-afe6-482b92342910","Type":"ContainerStarted","Data":"fa9add4b9a1cba22ac3fe6769b686290fddbb5201f96d773377d753b513464fa"} Feb 27 17:07:12 crc kubenswrapper[4708]: I0227 17:07:12.230874 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2qs44" Feb 27 17:07:12 crc kubenswrapper[4708]: I0227 17:07:12.231078 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs" Feb 27 17:07:12 crc kubenswrapper[4708]: I0227 17:07:12.231800 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-2qs44" Feb 27 17:07:12 crc kubenswrapper[4708]: I0227 17:07:12.231813 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs" Feb 27 17:07:12 crc kubenswrapper[4708]: I0227 17:07:12.587793 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs"] Feb 27 17:07:12 crc kubenswrapper[4708]: W0227 17:07:12.594767 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b0277f7_658b_4897_b034_9aab6cacc59e.slice/crio-310b29f70e00aa077221b4a066ca7e89d7ad5f1b52b545f96a8c90f202d19f10 WatchSource:0}: Error finding container 310b29f70e00aa077221b4a066ca7e89d7ad5f1b52b545f96a8c90f202d19f10: Status 404 returned error can't find the container with id 310b29f70e00aa077221b4a066ca7e89d7ad5f1b52b545f96a8c90f202d19f10 Feb 27 17:07:12 crc kubenswrapper[4708]: I0227 17:07:12.740941 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-2qs44"] Feb 27 17:07:12 crc kubenswrapper[4708]: I0227 17:07:12.935698 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs" event={"ID":"3b0277f7-658b-4897-b034-9aab6cacc59e","Type":"ContainerStarted","Data":"310b29f70e00aa077221b4a066ca7e89d7ad5f1b52b545f96a8c90f202d19f10"} Feb 27 17:07:12 crc kubenswrapper[4708]: I0227 17:07:12.937304 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-2qs44" event={"ID":"5c087cb8-7024-4186-9e20-5620cdb2fd9a","Type":"ContainerStarted","Data":"5a3e739bf15640a976f241955617f149560b3d33393ced2bf1c35036623b031c"} Feb 27 17:07:13 crc kubenswrapper[4708]: I0227 17:07:13.228212 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx" Feb 27 17:07:13 crc kubenswrapper[4708]: I0227 17:07:13.229364 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx" Feb 27 17:07:13 crc kubenswrapper[4708]: I0227 17:07:13.726718 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx"] Feb 27 17:07:13 crc kubenswrapper[4708]: I0227 17:07:13.947190 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx" event={"ID":"e46fb234-1f2b-4217-b76b-0e2900d525da","Type":"ContainerStarted","Data":"2d3190c853157d878bb2aabec9a3069a7343b2d4d2344f01cbeb35d561ba3977"} Feb 27 17:07:14 crc kubenswrapper[4708]: I0227 17:07:14.230221 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw" Feb 27 17:07:14 crc kubenswrapper[4708]: I0227 17:07:14.231106 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw" Feb 27 17:07:14 crc kubenswrapper[4708]: I0227 17:07:14.542135 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-x7wsw"] Feb 27 17:07:14 crc kubenswrapper[4708]: W0227 17:07:14.556447 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf03daf2a_7ba1_454e_a2fd_dd2e12631679.slice/crio-e77321e6172f45f4a8cd2b9331141f0efcefcb1f71d506a19275786ceed7e1ba WatchSource:0}: Error finding container e77321e6172f45f4a8cd2b9331141f0efcefcb1f71d506a19275786ceed7e1ba: Status 404 returned error can't find the container with id e77321e6172f45f4a8cd2b9331141f0efcefcb1f71d506a19275786ceed7e1ba Feb 27 17:07:14 crc kubenswrapper[4708]: I0227 17:07:14.954404 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw" event={"ID":"f03daf2a-7ba1-454e-a2fd-dd2e12631679","Type":"ContainerStarted","Data":"e77321e6172f45f4a8cd2b9331141f0efcefcb1f71d506a19275786ceed7e1ba"} Feb 27 17:07:19 crc kubenswrapper[4708]: I0227 17:07:19.997537 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-2qs44" event={"ID":"5c087cb8-7024-4186-9e20-5620cdb2fd9a","Type":"ContainerStarted","Data":"dee2ac474503847e83b6d4d92e49c7f3e3f604b2c5348e999e215706e66f63de"} Feb 27 17:07:19 crc kubenswrapper[4708]: I0227 17:07:19.998140 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-2qs44" Feb 27 17:07:20 crc kubenswrapper[4708]: I0227 17:07:20.002329 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm" event={"ID":"33badcb1-2622-423f-afe6-482b92342910","Type":"ContainerStarted","Data":"fcbc434a7e4659ce9e611038c332fc2337822dd52f24276bff8c8785e91f8e6a"} Feb 27 17:07:20 crc kubenswrapper[4708]: I0227 17:07:20.008277 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs" event={"ID":"3b0277f7-658b-4897-b034-9aab6cacc59e","Type":"ContainerStarted","Data":"4a7757190534f30a36269af1d70eccb56f17ae13146b4247b42c7182530a9e57"} Feb 27 17:07:20 crc kubenswrapper[4708]: I0227 17:07:20.010053 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx" event={"ID":"e46fb234-1f2b-4217-b76b-0e2900d525da","Type":"ContainerStarted","Data":"acb0e4195dec506450e48363b3247266ae746276b37cd0765f6fa49d2b287b8d"} Feb 27 17:07:20 crc kubenswrapper[4708]: I0227 17:07:20.038602 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-2qs44" podStartSLOduration=19.737356351 podStartE2EDuration="26.038578631s" podCreationTimestamp="2026-02-27 17:06:54 +0000 UTC" firstStartedPulling="2026-02-27 17:07:12.747377108 +0000 UTC m=+831.263174685" lastFinishedPulling="2026-02-27 17:07:19.048599378 +0000 UTC m=+837.564396965" observedRunningTime="2026-02-27 17:07:20.015934133 +0000 UTC m=+838.531731730" watchObservedRunningTime="2026-02-27 17:07:20.038578631 +0000 UTC m=+838.554376218" Feb 27 17:07:20 crc kubenswrapper[4708]: I0227 17:07:20.063229 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx" podStartSLOduration=20.794475034 podStartE2EDuration="26.063208816s" podCreationTimestamp="2026-02-27 17:06:54 +0000 UTC" firstStartedPulling="2026-02-27 17:07:13.742052748 +0000 UTC m=+832.257850335" lastFinishedPulling="2026-02-27 17:07:19.01078652 +0000 UTC m=+837.526584117" observedRunningTime="2026-02-27 17:07:20.06299713 +0000 UTC m=+838.578794727" watchObservedRunningTime="2026-02-27 17:07:20.063208816 +0000 UTC m=+838.579006403" Feb 27 17:07:20 crc kubenswrapper[4708]: I0227 17:07:20.065253 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs" podStartSLOduration=19.638944892 podStartE2EDuration="26.065248515s" podCreationTimestamp="2026-02-27 17:06:54 +0000 UTC" firstStartedPulling="2026-02-27 17:07:12.598079252 +0000 UTC m=+831.113876849" lastFinishedPulling="2026-02-27 17:07:19.024382875 +0000 UTC m=+837.540180472" observedRunningTime="2026-02-27 17:07:20.041719002 +0000 UTC m=+838.557516589" watchObservedRunningTime="2026-02-27 17:07:20.065248515 +0000 UTC m=+838.581046102" Feb 27 17:07:20 crc kubenswrapper[4708]: I0227 17:07:20.091097 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-mnthm" podStartSLOduration=18.835021112 podStartE2EDuration="26.091065445s" podCreationTimestamp="2026-02-27 17:06:54 +0000 UTC" firstStartedPulling="2026-02-27 17:07:11.754674975 +0000 UTC m=+830.270472562" lastFinishedPulling="2026-02-27 17:07:19.010719298 +0000 UTC m=+837.526516895" observedRunningTime="2026-02-27 17:07:20.088573943 +0000 UTC m=+838.604371530" watchObservedRunningTime="2026-02-27 17:07:20.091065445 +0000 UTC m=+838.606863022" Feb 27 17:07:22 crc kubenswrapper[4708]: I0227 17:07:22.359501 4708 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Feb 27 17:07:22 crc kubenswrapper[4708]: I0227 17:07:22.928941 4708 scope.go:117] "RemoveContainer" containerID="ce0bbc3f6718c3f80abb77d64bf1761e9f580c0379391ab3ce6ef4aa8912c4a7" Feb 27 17:07:23 crc kubenswrapper[4708]: I0227 17:07:23.059842 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-5hk97" Feb 27 17:07:24 crc kubenswrapper[4708]: I0227 17:07:24.034529 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw" event={"ID":"f03daf2a-7ba1-454e-a2fd-dd2e12631679","Type":"ContainerStarted","Data":"6588f9796db8d463cd92d04d5b8f4db5f77e2200d70d937b9129e7e0d86bff11"} Feb 27 17:07:24 crc kubenswrapper[4708]: I0227 17:07:24.035188 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw" Feb 27 17:07:24 crc kubenswrapper[4708]: I0227 17:07:24.039192 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-p6n6j_2c5353a5-c388-4046-bb29-8e73352588c2/kube-multus/2.log" Feb 27 17:07:24 crc kubenswrapper[4708]: I0227 17:07:24.039373 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw" Feb 27 17:07:24 crc kubenswrapper[4708]: I0227 17:07:24.069787 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-x7wsw" 
podStartSLOduration=21.406904784 podStartE2EDuration="30.069762987s" podCreationTimestamp="2026-02-27 17:06:54 +0000 UTC" firstStartedPulling="2026-02-27 17:07:14.559337017 +0000 UTC m=+833.075134604" lastFinishedPulling="2026-02-27 17:07:23.22219522 +0000 UTC m=+841.737992807" observedRunningTime="2026-02-27 17:07:24.068924163 +0000 UTC m=+842.584721790" watchObservedRunningTime="2026-02-27 17:07:24.069762987 +0000 UTC m=+842.585560614" Feb 27 17:07:25 crc kubenswrapper[4708]: I0227 17:07:25.273681 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-2qs44" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.556131 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-glgjb"] Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.557509 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-glgjb" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.562478 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.562678 4708 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-zbsx5" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.562830 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.565546 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-glgjb"] Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.576969 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858654f9db-8qwnr"] Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.580751 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltqzk\" (UniqueName: \"kubernetes.io/projected/628c406f-d7a1-471d-a1b5-56413469baf9-kube-api-access-ltqzk\") pod \"cert-manager-cainjector-cf98fcc89-glgjb\" (UID: \"628c406f-d7a1-471d-a1b5-56413469baf9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-glgjb" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.589051 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-8qwnr" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.591589 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-8qwnr"] Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.592797 4708 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-df2b4" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.642306 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-vcm49"] Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.643101 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-vcm49" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.644806 4708 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5x7p2" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.665058 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-vcm49"] Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.681812 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqknl\" (UniqueName: \"kubernetes.io/projected/29594fd1-8f6b-4b90-aad4-0ef65bb098b3-kube-api-access-tqknl\") pod \"cert-manager-858654f9db-8qwnr\" (UID: \"29594fd1-8f6b-4b90-aad4-0ef65bb098b3\") " pod="cert-manager/cert-manager-858654f9db-8qwnr" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.681894 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltqzk\" (UniqueName: \"kubernetes.io/projected/628c406f-d7a1-471d-a1b5-56413469baf9-kube-api-access-ltqzk\") pod \"cert-manager-cainjector-cf98fcc89-glgjb\" (UID: \"628c406f-d7a1-471d-a1b5-56413469baf9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-glgjb" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.681931 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wt8pp\" (UniqueName: \"kubernetes.io/projected/aa734fc8-e63b-4877-bc29-774dfdbc8768-kube-api-access-wt8pp\") pod \"cert-manager-webhook-687f57d79b-vcm49\" (UID: \"aa734fc8-e63b-4877-bc29-774dfdbc8768\") " pod="cert-manager/cert-manager-webhook-687f57d79b-vcm49" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.703512 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltqzk\" (UniqueName: \"kubernetes.io/projected/628c406f-d7a1-471d-a1b5-56413469baf9-kube-api-access-ltqzk\") pod \"cert-manager-cainjector-cf98fcc89-glgjb\" (UID: \"628c406f-d7a1-471d-a1b5-56413469baf9\") " pod="cert-manager/cert-manager-cainjector-cf98fcc89-glgjb" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.783131 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wt8pp\" (UniqueName: \"kubernetes.io/projected/aa734fc8-e63b-4877-bc29-774dfdbc8768-kube-api-access-wt8pp\") pod \"cert-manager-webhook-687f57d79b-vcm49\" (UID: \"aa734fc8-e63b-4877-bc29-774dfdbc8768\") " pod="cert-manager/cert-manager-webhook-687f57d79b-vcm49" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.783210 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tqknl\" (UniqueName: \"kubernetes.io/projected/29594fd1-8f6b-4b90-aad4-0ef65bb098b3-kube-api-access-tqknl\") pod \"cert-manager-858654f9db-8qwnr\" (UID: \"29594fd1-8f6b-4b90-aad4-0ef65bb098b3\") " pod="cert-manager/cert-manager-858654f9db-8qwnr" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.803431 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqknl\" (UniqueName: \"kubernetes.io/projected/29594fd1-8f6b-4b90-aad4-0ef65bb098b3-kube-api-access-tqknl\") pod \"cert-manager-858654f9db-8qwnr\" (UID: \"29594fd1-8f6b-4b90-aad4-0ef65bb098b3\") " pod="cert-manager/cert-manager-858654f9db-8qwnr" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.804445 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wt8pp\" 
(UniqueName: \"kubernetes.io/projected/aa734fc8-e63b-4877-bc29-774dfdbc8768-kube-api-access-wt8pp\") pod \"cert-manager-webhook-687f57d79b-vcm49\" (UID: \"aa734fc8-e63b-4877-bc29-774dfdbc8768\") " pod="cert-manager/cert-manager-webhook-687f57d79b-vcm49" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.918669 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-cf98fcc89-glgjb" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.933742 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858654f9db-8qwnr" Feb 27 17:07:34 crc kubenswrapper[4708]: I0227 17:07:34.964493 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-687f57d79b-vcm49" Feb 27 17:07:35 crc kubenswrapper[4708]: I0227 17:07:35.231179 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858654f9db-8qwnr"] Feb 27 17:07:35 crc kubenswrapper[4708]: W0227 17:07:35.237798 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29594fd1_8f6b_4b90_aad4_0ef65bb098b3.slice/crio-46f37498feaebe8f3d4fa80fd3133ff6074fb29fbc7a53f2fa8c9f72a852adc1 WatchSource:0}: Error finding container 46f37498feaebe8f3d4fa80fd3133ff6074fb29fbc7a53f2fa8c9f72a852adc1: Status 404 returned error can't find the container with id 46f37498feaebe8f3d4fa80fd3133ff6074fb29fbc7a53f2fa8c9f72a852adc1 Feb 27 17:07:35 crc kubenswrapper[4708]: I0227 17:07:35.242590 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-cf98fcc89-glgjb"] Feb 27 17:07:35 crc kubenswrapper[4708]: W0227 17:07:35.251198 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod628c406f_d7a1_471d_a1b5_56413469baf9.slice/crio-fbc1746e7655612dc33a2ec22a4d045406cd1e6003f8f5a33ac0b66d6a59b084 WatchSource:0}: Error finding container fbc1746e7655612dc33a2ec22a4d045406cd1e6003f8f5a33ac0b66d6a59b084: Status 404 returned error can't find the container with id fbc1746e7655612dc33a2ec22a4d045406cd1e6003f8f5a33ac0b66d6a59b084 Feb 27 17:07:35 crc kubenswrapper[4708]: I0227 17:07:35.293840 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-687f57d79b-vcm49"] Feb 27 17:07:35 crc kubenswrapper[4708]: W0227 17:07:35.311461 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaa734fc8_e63b_4877_bc29_774dfdbc8768.slice/crio-66e50efab49501d0f7abf77e6127bf767ce7af46ca2d8e0fb6f46a28fc6d155e WatchSource:0}: Error finding container 66e50efab49501d0f7abf77e6127bf767ce7af46ca2d8e0fb6f46a28fc6d155e: Status 404 returned error can't find the container with id 66e50efab49501d0f7abf77e6127bf767ce7af46ca2d8e0fb6f46a28fc6d155e Feb 27 17:07:36 crc kubenswrapper[4708]: I0227 17:07:36.131716 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-glgjb" event={"ID":"628c406f-d7a1-471d-a1b5-56413469baf9","Type":"ContainerStarted","Data":"fbc1746e7655612dc33a2ec22a4d045406cd1e6003f8f5a33ac0b66d6a59b084"} Feb 27 17:07:36 crc kubenswrapper[4708]: I0227 17:07:36.133444 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-vcm49" 
event={"ID":"aa734fc8-e63b-4877-bc29-774dfdbc8768","Type":"ContainerStarted","Data":"66e50efab49501d0f7abf77e6127bf767ce7af46ca2d8e0fb6f46a28fc6d155e"} Feb 27 17:07:36 crc kubenswrapper[4708]: I0227 17:07:36.135146 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-8qwnr" event={"ID":"29594fd1-8f6b-4b90-aad4-0ef65bb098b3","Type":"ContainerStarted","Data":"46f37498feaebe8f3d4fa80fd3133ff6074fb29fbc7a53f2fa8c9f72a852adc1"} Feb 27 17:07:40 crc kubenswrapper[4708]: I0227 17:07:40.164748 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858654f9db-8qwnr" event={"ID":"29594fd1-8f6b-4b90-aad4-0ef65bb098b3","Type":"ContainerStarted","Data":"50d1bc7a916a6c2ec48b58d71cfae82e4978e646ca6ef3dc7482dc56276f9705"} Feb 27 17:07:40 crc kubenswrapper[4708]: I0227 17:07:40.191104 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858654f9db-8qwnr" podStartSLOduration=1.5236385270000001 podStartE2EDuration="6.191076684s" podCreationTimestamp="2026-02-27 17:07:34 +0000 UTC" firstStartedPulling="2026-02-27 17:07:35.243380232 +0000 UTC m=+853.759177819" lastFinishedPulling="2026-02-27 17:07:39.910818379 +0000 UTC m=+858.426615976" observedRunningTime="2026-02-27 17:07:40.183100585 +0000 UTC m=+858.698898192" watchObservedRunningTime="2026-02-27 17:07:40.191076684 +0000 UTC m=+858.706874281" Feb 27 17:07:41 crc kubenswrapper[4708]: I0227 17:07:41.173142 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-cf98fcc89-glgjb" event={"ID":"628c406f-d7a1-471d-a1b5-56413469baf9","Type":"ContainerStarted","Data":"75c57a31f0cf6eeac09937a9a9f161d948577a3685d19dec3334f7b73445df66"} Feb 27 17:07:41 crc kubenswrapper[4708]: I0227 17:07:41.175242 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-687f57d79b-vcm49" event={"ID":"aa734fc8-e63b-4877-bc29-774dfdbc8768","Type":"ContainerStarted","Data":"d570ca6c25aa586349bd59969f1af78b3e837eb13327df84b394f43cde99279f"} Feb 27 17:07:41 crc kubenswrapper[4708]: I0227 17:07:41.192303 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-cf98fcc89-glgjb" podStartSLOduration=2.569674506 podStartE2EDuration="7.19228487s" podCreationTimestamp="2026-02-27 17:07:34 +0000 UTC" firstStartedPulling="2026-02-27 17:07:35.253010078 +0000 UTC m=+853.768807665" lastFinishedPulling="2026-02-27 17:07:39.875620402 +0000 UTC m=+858.391418029" observedRunningTime="2026-02-27 17:07:41.190067287 +0000 UTC m=+859.705864904" watchObservedRunningTime="2026-02-27 17:07:41.19228487 +0000 UTC m=+859.708082467" Feb 27 17:07:41 crc kubenswrapper[4708]: I0227 17:07:41.240328 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-687f57d79b-vcm49" podStartSLOduration=2.637345014 podStartE2EDuration="7.240298545s" podCreationTimestamp="2026-02-27 17:07:34 +0000 UTC" firstStartedPulling="2026-02-27 17:07:35.314598271 +0000 UTC m=+853.830395858" lastFinishedPulling="2026-02-27 17:07:39.917551732 +0000 UTC m=+858.433349389" observedRunningTime="2026-02-27 17:07:41.234402136 +0000 UTC m=+859.750199753" watchObservedRunningTime="2026-02-27 17:07:41.240298545 +0000 UTC m=+859.756096172" Feb 27 17:07:42 crc kubenswrapper[4708]: I0227 17:07:42.183811 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-687f57d79b-vcm49" Feb 27 17:07:49 
crc kubenswrapper[4708]: I0227 17:07:49.969118 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-687f57d79b-vcm49" Feb 27 17:08:00 crc kubenswrapper[4708]: I0227 17:08:00.152308 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536868-vzmzz"] Feb 27 17:08:00 crc kubenswrapper[4708]: I0227 17:08:00.154342 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536868-vzmzz" Feb 27 17:08:00 crc kubenswrapper[4708]: I0227 17:08:00.157027 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:08:00 crc kubenswrapper[4708]: I0227 17:08:00.157818 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:08:00 crc kubenswrapper[4708]: I0227 17:08:00.157926 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:08:00 crc kubenswrapper[4708]: I0227 17:08:00.164495 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536868-vzmzz"] Feb 27 17:08:00 crc kubenswrapper[4708]: I0227 17:08:00.234589 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69h5q\" (UniqueName: \"kubernetes.io/projected/6ec21f8e-82e7-4d31-bc5e-906388eef4e0-kube-api-access-69h5q\") pod \"auto-csr-approver-29536868-vzmzz\" (UID: \"6ec21f8e-82e7-4d31-bc5e-906388eef4e0\") " pod="openshift-infra/auto-csr-approver-29536868-vzmzz" Feb 27 17:08:00 crc kubenswrapper[4708]: I0227 17:08:00.336670 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69h5q\" (UniqueName: \"kubernetes.io/projected/6ec21f8e-82e7-4d31-bc5e-906388eef4e0-kube-api-access-69h5q\") pod \"auto-csr-approver-29536868-vzmzz\" (UID: \"6ec21f8e-82e7-4d31-bc5e-906388eef4e0\") " pod="openshift-infra/auto-csr-approver-29536868-vzmzz" Feb 27 17:08:00 crc kubenswrapper[4708]: I0227 17:08:00.370143 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69h5q\" (UniqueName: \"kubernetes.io/projected/6ec21f8e-82e7-4d31-bc5e-906388eef4e0-kube-api-access-69h5q\") pod \"auto-csr-approver-29536868-vzmzz\" (UID: \"6ec21f8e-82e7-4d31-bc5e-906388eef4e0\") " pod="openshift-infra/auto-csr-approver-29536868-vzmzz" Feb 27 17:08:00 crc kubenswrapper[4708]: I0227 17:08:00.496230 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536868-vzmzz" Feb 27 17:08:00 crc kubenswrapper[4708]: I0227 17:08:00.803503 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536868-vzmzz"] Feb 27 17:08:00 crc kubenswrapper[4708]: W0227 17:08:00.806470 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6ec21f8e_82e7_4d31_bc5e_906388eef4e0.slice/crio-25efca911b71d5c4aaad8058feda5610acb3804c40bccbba8eed4d4362680d61 WatchSource:0}: Error finding container 25efca911b71d5c4aaad8058feda5610acb3804c40bccbba8eed4d4362680d61: Status 404 returned error can't find the container with id 25efca911b71d5c4aaad8058feda5610acb3804c40bccbba8eed4d4362680d61 Feb 27 17:08:01 crc kubenswrapper[4708]: I0227 17:08:01.322315 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536868-vzmzz" event={"ID":"6ec21f8e-82e7-4d31-bc5e-906388eef4e0","Type":"ContainerStarted","Data":"25efca911b71d5c4aaad8058feda5610acb3804c40bccbba8eed4d4362680d61"} Feb 27 17:08:02 crc kubenswrapper[4708]: I0227 17:08:02.331727 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536868-vzmzz" event={"ID":"6ec21f8e-82e7-4d31-bc5e-906388eef4e0","Type":"ContainerStarted","Data":"cc23d8027d898e28570823e7a4b6dc0a8dcf81eabc27455dc4775141efb2084c"} Feb 27 17:08:02 crc kubenswrapper[4708]: I0227 17:08:02.357065 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536868-vzmzz" podStartSLOduration=1.383398089 podStartE2EDuration="2.357034726s" podCreationTimestamp="2026-02-27 17:08:00 +0000 UTC" firstStartedPulling="2026-02-27 17:08:00.808440046 +0000 UTC m=+879.324237633" lastFinishedPulling="2026-02-27 17:08:01.782076653 +0000 UTC m=+880.297874270" observedRunningTime="2026-02-27 17:08:02.35301515 +0000 UTC m=+880.868812767" watchObservedRunningTime="2026-02-27 17:08:02.357034726 +0000 UTC m=+880.872832343" Feb 27 17:08:03 crc kubenswrapper[4708]: I0227 17:08:03.341439 4708 generic.go:334] "Generic (PLEG): container finished" podID="6ec21f8e-82e7-4d31-bc5e-906388eef4e0" containerID="cc23d8027d898e28570823e7a4b6dc0a8dcf81eabc27455dc4775141efb2084c" exitCode=0 Feb 27 17:08:03 crc kubenswrapper[4708]: I0227 17:08:03.341500 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536868-vzmzz" event={"ID":"6ec21f8e-82e7-4d31-bc5e-906388eef4e0","Type":"ContainerDied","Data":"cc23d8027d898e28570823e7a4b6dc0a8dcf81eabc27455dc4775141efb2084c"} Feb 27 17:08:04 crc kubenswrapper[4708]: I0227 17:08:04.696954 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536868-vzmzz" Feb 27 17:08:04 crc kubenswrapper[4708]: I0227 17:08:04.802571 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69h5q\" (UniqueName: \"kubernetes.io/projected/6ec21f8e-82e7-4d31-bc5e-906388eef4e0-kube-api-access-69h5q\") pod \"6ec21f8e-82e7-4d31-bc5e-906388eef4e0\" (UID: \"6ec21f8e-82e7-4d31-bc5e-906388eef4e0\") " Feb 27 17:08:04 crc kubenswrapper[4708]: I0227 17:08:04.811101 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ec21f8e-82e7-4d31-bc5e-906388eef4e0-kube-api-access-69h5q" (OuterVolumeSpecName: "kube-api-access-69h5q") pod "6ec21f8e-82e7-4d31-bc5e-906388eef4e0" (UID: "6ec21f8e-82e7-4d31-bc5e-906388eef4e0"). InnerVolumeSpecName "kube-api-access-69h5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:08:04 crc kubenswrapper[4708]: I0227 17:08:04.904332 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69h5q\" (UniqueName: \"kubernetes.io/projected/6ec21f8e-82e7-4d31-bc5e-906388eef4e0-kube-api-access-69h5q\") on node \"crc\" DevicePath \"\"" Feb 27 17:08:05 crc kubenswrapper[4708]: I0227 17:08:05.329902 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536862-qkswc"] Feb 27 17:08:05 crc kubenswrapper[4708]: I0227 17:08:05.338590 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536862-qkswc"] Feb 27 17:08:05 crc kubenswrapper[4708]: I0227 17:08:05.360463 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536868-vzmzz" event={"ID":"6ec21f8e-82e7-4d31-bc5e-906388eef4e0","Type":"ContainerDied","Data":"25efca911b71d5c4aaad8058feda5610acb3804c40bccbba8eed4d4362680d61"} Feb 27 17:08:05 crc kubenswrapper[4708]: I0227 17:08:05.360515 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25efca911b71d5c4aaad8058feda5610acb3804c40bccbba8eed4d4362680d61" Feb 27 17:08:05 crc kubenswrapper[4708]: I0227 17:08:05.360538 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536868-vzmzz" Feb 27 17:08:06 crc kubenswrapper[4708]: I0227 17:08:06.241238 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcfc32db-1f2d-454c-ac76-baba5f5423f6" path="/var/lib/kubelet/pods/fcfc32db-1f2d-454c-ac76-baba5f5423f6/volumes" Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.764150 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x"] Feb 27 17:08:20 crc kubenswrapper[4708]: E0227 17:08:20.766413 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ec21f8e-82e7-4d31-bc5e-906388eef4e0" containerName="oc" Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.767374 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ec21f8e-82e7-4d31-bc5e-906388eef4e0" containerName="oc" Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.767708 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ec21f8e-82e7-4d31-bc5e-906388eef4e0" containerName="oc" Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.769305 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.771902 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.791893 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x"] Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.818416 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bba75f99-dc6c-4c6a-ae97-e636ed291513-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x\" (UID: \"bba75f99-dc6c-4c6a-ae97-e636ed291513\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.818468 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8df6\" (UniqueName: \"kubernetes.io/projected/bba75f99-dc6c-4c6a-ae97-e636ed291513-kube-api-access-q8df6\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x\" (UID: \"bba75f99-dc6c-4c6a-ae97-e636ed291513\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.818684 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bba75f99-dc6c-4c6a-ae97-e636ed291513-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x\" (UID: \"bba75f99-dc6c-4c6a-ae97-e636ed291513\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.920383 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bba75f99-dc6c-4c6a-ae97-e636ed291513-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x\" (UID: \"bba75f99-dc6c-4c6a-ae97-e636ed291513\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.920620 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8df6\" (UniqueName: \"kubernetes.io/projected/bba75f99-dc6c-4c6a-ae97-e636ed291513-kube-api-access-q8df6\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x\" (UID: \"bba75f99-dc6c-4c6a-ae97-e636ed291513\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.920774 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bba75f99-dc6c-4c6a-ae97-e636ed291513-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x\" (UID: \"bba75f99-dc6c-4c6a-ae97-e636ed291513\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.920935 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/bba75f99-dc6c-4c6a-ae97-e636ed291513-bundle\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x\" (UID: \"bba75f99-dc6c-4c6a-ae97-e636ed291513\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.922032 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bba75f99-dc6c-4c6a-ae97-e636ed291513-util\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x\" (UID: \"bba75f99-dc6c-4c6a-ae97-e636ed291513\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" Feb 27 17:08:20 crc kubenswrapper[4708]: I0227 17:08:20.941924 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8df6\" (UniqueName: \"kubernetes.io/projected/bba75f99-dc6c-4c6a-ae97-e636ed291513-kube-api-access-q8df6\") pod \"7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x\" (UID: \"bba75f99-dc6c-4c6a-ae97-e636ed291513\") " pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" Feb 27 17:08:21 crc kubenswrapper[4708]: I0227 17:08:21.089905 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" Feb 27 17:08:21 crc kubenswrapper[4708]: I0227 17:08:21.360451 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x"] Feb 27 17:08:21 crc kubenswrapper[4708]: I0227 17:08:21.492232 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" event={"ID":"bba75f99-dc6c-4c6a-ae97-e636ed291513","Type":"ContainerStarted","Data":"5358a4ce443f9389c6a410d32845a38f4b5854ceea5c8027612bf2c76db9f689"} Feb 27 17:08:22 crc kubenswrapper[4708]: I0227 17:08:22.500386 4708 generic.go:334] "Generic (PLEG): container finished" podID="bba75f99-dc6c-4c6a-ae97-e636ed291513" containerID="b2c33910e2f852b974a4ca709b5a78edd6415afa5dc4eb58fd37cf62e740baad" exitCode=0 Feb 27 17:08:22 crc kubenswrapper[4708]: I0227 17:08:22.500453 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" event={"ID":"bba75f99-dc6c-4c6a-ae97-e636ed291513","Type":"ContainerDied","Data":"b2c33910e2f852b974a4ca709b5a78edd6415afa5dc4eb58fd37cf62e740baad"} Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.069645 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9k4ch"] Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.070628 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.089913 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9k4ch"] Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.116584 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["minio-dev/minio"] Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.117242 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="minio-dev/minio" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.118781 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"kube-root-ca.crt" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.119184 4708 reflector.go:368] Caches populated for *v1.Secret from object-"minio-dev"/"default-dockercfg-627cf" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.120205 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"minio-dev"/"openshift-service-ca.crt" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.128036 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.146986 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-utilities\") pod \"redhat-operators-9k4ch\" (UID: \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\") " pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.147031 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-catalog-content\") pod \"redhat-operators-9k4ch\" (UID: \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\") " pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.147054 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4s5ws\" (UniqueName: \"kubernetes.io/projected/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-kube-api-access-4s5ws\") pod \"redhat-operators-9k4ch\" (UID: \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\") " pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.226741 4708 scope.go:117] "RemoveContainer" containerID="665b03431ec166b8505cdc2e5a8f29e173ed7bdbbfe9cf74fe04d7744bd0872f" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.248479 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-utilities\") pod \"redhat-operators-9k4ch\" (UID: \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\") " pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.248678 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-catalog-content\") pod \"redhat-operators-9k4ch\" (UID: \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\") " pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.248702 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4s5ws\" (UniqueName: \"kubernetes.io/projected/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-kube-api-access-4s5ws\") pod \"redhat-operators-9k4ch\" (UID: \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\") " pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.248749 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-67296f9d-35d2-4628-852a-e718d78d15ac\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67296f9d-35d2-4628-852a-e718d78d15ac\") pod \"minio\" (UID: \"c64d99a5-527e-4594-b1a1-92d576de45a6\") " pod="minio-dev/minio" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.248771 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lh9z\" (UniqueName: \"kubernetes.io/projected/c64d99a5-527e-4594-b1a1-92d576de45a6-kube-api-access-4lh9z\") pod \"minio\" (UID: \"c64d99a5-527e-4594-b1a1-92d576de45a6\") " pod="minio-dev/minio" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.248937 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-utilities\") pod \"redhat-operators-9k4ch\" (UID: \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\") " pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.249100 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-catalog-content\") pod \"redhat-operators-9k4ch\" (UID: \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\") " pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.272627 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4s5ws\" (UniqueName: \"kubernetes.io/projected/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-kube-api-access-4s5ws\") pod \"redhat-operators-9k4ch\" (UID: \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\") " pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.349891 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-67296f9d-35d2-4628-852a-e718d78d15ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67296f9d-35d2-4628-852a-e718d78d15ac\") pod \"minio\" (UID: \"c64d99a5-527e-4594-b1a1-92d576de45a6\") " pod="minio-dev/minio" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.349936 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lh9z\" (UniqueName: \"kubernetes.io/projected/c64d99a5-527e-4594-b1a1-92d576de45a6-kube-api-access-4lh9z\") pod \"minio\" (UID: \"c64d99a5-527e-4594-b1a1-92d576de45a6\") " pod="minio-dev/minio" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.361929 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.361975 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-67296f9d-35d2-4628-852a-e718d78d15ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67296f9d-35d2-4628-852a-e718d78d15ac\") pod \"minio\" (UID: \"c64d99a5-527e-4594-b1a1-92d576de45a6\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/2e162cdea7d9582178c33d2650257821dcc48096030ec763db33876baa5d7a79/globalmount\"" pod="minio-dev/minio" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.368499 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lh9z\" (UniqueName: \"kubernetes.io/projected/c64d99a5-527e-4594-b1a1-92d576de45a6-kube-api-access-4lh9z\") pod \"minio\" (UID: \"c64d99a5-527e-4594-b1a1-92d576de45a6\") " pod="minio-dev/minio" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.383272 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.385091 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-67296f9d-35d2-4628-852a-e718d78d15ac\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-67296f9d-35d2-4628-852a-e718d78d15ac\") pod \"minio\" (UID: \"c64d99a5-527e-4594-b1a1-92d576de45a6\") " pod="minio-dev/minio" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.449381 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="minio-dev/minio" Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.683830 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["minio-dev/minio"] Feb 27 17:08:23 crc kubenswrapper[4708]: W0227 17:08:23.691930 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc64d99a5_527e_4594_b1a1_92d576de45a6.slice/crio-a634ca521b69ba3c455662009b41b8f558e8ff4be9b2bd44466f6d3a98e7b6a6 WatchSource:0}: Error finding container a634ca521b69ba3c455662009b41b8f558e8ff4be9b2bd44466f6d3a98e7b6a6: Status 404 returned error can't find the container with id a634ca521b69ba3c455662009b41b8f558e8ff4be9b2bd44466f6d3a98e7b6a6 Feb 27 17:08:23 crc kubenswrapper[4708]: I0227 17:08:23.800490 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9k4ch"] Feb 27 17:08:23 crc kubenswrapper[4708]: W0227 17:08:23.809659 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad67feec_cfdb_4647_a7a0_30b2ee10f2f5.slice/crio-a67421acd909ea6484dc63711c6db18779dc318ea0d689620494d708a3b0a84c WatchSource:0}: Error finding container a67421acd909ea6484dc63711c6db18779dc318ea0d689620494d708a3b0a84c: Status 404 returned error can't find the container with id a67421acd909ea6484dc63711c6db18779dc318ea0d689620494d708a3b0a84c Feb 27 17:08:23 crc kubenswrapper[4708]: E0227 17:08:23.898097 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbba75f99_dc6c_4c6a_ae97_e636ed291513.slice/crio-conmon-a16fd3c8e04c19b3acc94d38d4d613f20f632fcc33962b06b22513954c210be8.scope\": RecentStats: unable to find data in memory cache]" Feb 27 17:08:24 crc kubenswrapper[4708]: I0227 17:08:24.518692 4708 generic.go:334] "Generic (PLEG): 
container finished" podID="ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" containerID="254a7e26fe31003dea3d3e7a6d88c69c034da06dacd010539a3b34add1f29feb" exitCode=0 Feb 27 17:08:24 crc kubenswrapper[4708]: I0227 17:08:24.518741 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9k4ch" event={"ID":"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5","Type":"ContainerDied","Data":"254a7e26fe31003dea3d3e7a6d88c69c034da06dacd010539a3b34add1f29feb"} Feb 27 17:08:24 crc kubenswrapper[4708]: I0227 17:08:24.519936 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9k4ch" event={"ID":"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5","Type":"ContainerStarted","Data":"a67421acd909ea6484dc63711c6db18779dc318ea0d689620494d708a3b0a84c"} Feb 27 17:08:24 crc kubenswrapper[4708]: I0227 17:08:24.531795 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"c64d99a5-527e-4594-b1a1-92d576de45a6","Type":"ContainerStarted","Data":"a634ca521b69ba3c455662009b41b8f558e8ff4be9b2bd44466f6d3a98e7b6a6"} Feb 27 17:08:24 crc kubenswrapper[4708]: I0227 17:08:24.533933 4708 generic.go:334] "Generic (PLEG): container finished" podID="bba75f99-dc6c-4c6a-ae97-e636ed291513" containerID="a16fd3c8e04c19b3acc94d38d4d613f20f632fcc33962b06b22513954c210be8" exitCode=0 Feb 27 17:08:24 crc kubenswrapper[4708]: I0227 17:08:24.533970 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" event={"ID":"bba75f99-dc6c-4c6a-ae97-e636ed291513","Type":"ContainerDied","Data":"a16fd3c8e04c19b3acc94d38d4d613f20f632fcc33962b06b22513954c210be8"} Feb 27 17:08:25 crc kubenswrapper[4708]: I0227 17:08:25.543794 4708 generic.go:334] "Generic (PLEG): container finished" podID="bba75f99-dc6c-4c6a-ae97-e636ed291513" containerID="80b094dc3709a7cba5ef20040ecd70eda33dc5f44bd6215c9abf243f8c7d4fc7" exitCode=0 Feb 27 17:08:25 crc kubenswrapper[4708]: I0227 17:08:25.543996 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" event={"ID":"bba75f99-dc6c-4c6a-ae97-e636ed291513","Type":"ContainerDied","Data":"80b094dc3709a7cba5ef20040ecd70eda33dc5f44bd6215c9abf243f8c7d4fc7"} Feb 27 17:08:26 crc kubenswrapper[4708]: I0227 17:08:26.551801 4708 generic.go:334] "Generic (PLEG): container finished" podID="ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" containerID="ccbca2044c0d44382d09222d1c2e620738240a6061b684b46ac843d7309a9020" exitCode=0 Feb 27 17:08:26 crc kubenswrapper[4708]: I0227 17:08:26.552931 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9k4ch" event={"ID":"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5","Type":"ContainerDied","Data":"ccbca2044c0d44382d09222d1c2e620738240a6061b684b46ac843d7309a9020"} Feb 27 17:08:27 crc kubenswrapper[4708]: I0227 17:08:27.240665 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" Feb 27 17:08:27 crc kubenswrapper[4708]: I0227 17:08:27.299301 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8df6\" (UniqueName: \"kubernetes.io/projected/bba75f99-dc6c-4c6a-ae97-e636ed291513-kube-api-access-q8df6\") pod \"bba75f99-dc6c-4c6a-ae97-e636ed291513\" (UID: \"bba75f99-dc6c-4c6a-ae97-e636ed291513\") " Feb 27 17:08:27 crc kubenswrapper[4708]: I0227 17:08:27.299403 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bba75f99-dc6c-4c6a-ae97-e636ed291513-bundle\") pod \"bba75f99-dc6c-4c6a-ae97-e636ed291513\" (UID: \"bba75f99-dc6c-4c6a-ae97-e636ed291513\") " Feb 27 17:08:27 crc kubenswrapper[4708]: I0227 17:08:27.299464 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bba75f99-dc6c-4c6a-ae97-e636ed291513-util\") pod \"bba75f99-dc6c-4c6a-ae97-e636ed291513\" (UID: \"bba75f99-dc6c-4c6a-ae97-e636ed291513\") " Feb 27 17:08:27 crc kubenswrapper[4708]: I0227 17:08:27.301312 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bba75f99-dc6c-4c6a-ae97-e636ed291513-bundle" (OuterVolumeSpecName: "bundle") pod "bba75f99-dc6c-4c6a-ae97-e636ed291513" (UID: "bba75f99-dc6c-4c6a-ae97-e636ed291513"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:08:27 crc kubenswrapper[4708]: I0227 17:08:27.308142 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bba75f99-dc6c-4c6a-ae97-e636ed291513-kube-api-access-q8df6" (OuterVolumeSpecName: "kube-api-access-q8df6") pod "bba75f99-dc6c-4c6a-ae97-e636ed291513" (UID: "bba75f99-dc6c-4c6a-ae97-e636ed291513"). InnerVolumeSpecName "kube-api-access-q8df6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:08:27 crc kubenswrapper[4708]: I0227 17:08:27.313232 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bba75f99-dc6c-4c6a-ae97-e636ed291513-util" (OuterVolumeSpecName: "util") pod "bba75f99-dc6c-4c6a-ae97-e636ed291513" (UID: "bba75f99-dc6c-4c6a-ae97-e636ed291513"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:08:27 crc kubenswrapper[4708]: I0227 17:08:27.401061 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8df6\" (UniqueName: \"kubernetes.io/projected/bba75f99-dc6c-4c6a-ae97-e636ed291513-kube-api-access-q8df6\") on node \"crc\" DevicePath \"\"" Feb 27 17:08:27 crc kubenswrapper[4708]: I0227 17:08:27.401096 4708 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bba75f99-dc6c-4c6a-ae97-e636ed291513-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:08:27 crc kubenswrapper[4708]: I0227 17:08:27.401113 4708 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bba75f99-dc6c-4c6a-ae97-e636ed291513-util\") on node \"crc\" DevicePath \"\"" Feb 27 17:08:27 crc kubenswrapper[4708]: I0227 17:08:27.561937 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" event={"ID":"bba75f99-dc6c-4c6a-ae97-e636ed291513","Type":"ContainerDied","Data":"5358a4ce443f9389c6a410d32845a38f4b5854ceea5c8027612bf2c76db9f689"} Feb 27 17:08:27 crc kubenswrapper[4708]: I0227 17:08:27.561968 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5358a4ce443f9389c6a410d32845a38f4b5854ceea5c8027612bf2c76db9f689" Feb 27 17:08:27 crc kubenswrapper[4708]: I0227 17:08:27.562046 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x" Feb 27 17:08:28 crc kubenswrapper[4708]: I0227 17:08:28.571700 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="minio-dev/minio" event={"ID":"c64d99a5-527e-4594-b1a1-92d576de45a6","Type":"ContainerStarted","Data":"f0ff5b2e2969139c1b0db42f201ce8ad551c6392f1ce3f04a1bfc271b353d43e"} Feb 27 17:08:28 crc kubenswrapper[4708]: I0227 17:08:28.575754 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9k4ch" event={"ID":"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5","Type":"ContainerStarted","Data":"866c17f3a2d5ed284794672cb8d293c8f8b8816bbe895f9c0bb3ac9f6debd37f"} Feb 27 17:08:28 crc kubenswrapper[4708]: I0227 17:08:28.597022 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="minio-dev/minio" podStartSLOduration=4.753835766 podStartE2EDuration="8.597005953s" podCreationTimestamp="2026-02-27 17:08:20 +0000 UTC" firstStartedPulling="2026-02-27 17:08:23.69638522 +0000 UTC m=+902.212182807" lastFinishedPulling="2026-02-27 17:08:27.539555397 +0000 UTC m=+906.055352994" observedRunningTime="2026-02-27 17:08:28.593009259 +0000 UTC m=+907.108806856" watchObservedRunningTime="2026-02-27 17:08:28.597005953 +0000 UTC m=+907.112803550" Feb 27 17:08:28 crc kubenswrapper[4708]: I0227 17:08:28.636780 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9k4ch" podStartSLOduration=2.6443286219999997 podStartE2EDuration="5.636753701s" podCreationTimestamp="2026-02-27 17:08:23 +0000 UTC" firstStartedPulling="2026-02-27 17:08:24.528022981 +0000 UTC m=+903.043820568" lastFinishedPulling="2026-02-27 17:08:27.52044805 +0000 UTC m=+906.036245647" observedRunningTime="2026-02-27 17:08:28.633822077 +0000 UTC m=+907.149619704" watchObservedRunningTime="2026-02-27 17:08:28.636753701 +0000 UTC m=+907.152551328" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 
17:08:33.384361 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.384861 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.458353 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r"] Feb 27 17:08:33 crc kubenswrapper[4708]: E0227 17:08:33.458718 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba75f99-dc6c-4c6a-ae97-e636ed291513" containerName="pull" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.458731 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba75f99-dc6c-4c6a-ae97-e636ed291513" containerName="pull" Feb 27 17:08:33 crc kubenswrapper[4708]: E0227 17:08:33.458742 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba75f99-dc6c-4c6a-ae97-e636ed291513" containerName="extract" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.458747 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba75f99-dc6c-4c6a-ae97-e636ed291513" containerName="extract" Feb 27 17:08:33 crc kubenswrapper[4708]: E0227 17:08:33.458766 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bba75f99-dc6c-4c6a-ae97-e636ed291513" containerName="util" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.458772 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="bba75f99-dc6c-4c6a-ae97-e636ed291513" containerName="util" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.458900 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="bba75f99-dc6c-4c6a-ae97-e636ed291513" containerName="extract" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.459432 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.461555 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"openshift-service-ca.crt" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.461618 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-service-cert" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.461901 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-controller-manager-dockercfg-lts8z" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.463198 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"loki-operator-manager-config" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.463235 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators-redhat"/"loki-operator-metrics" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.464760 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators-redhat"/"kube-root-ca.crt" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.478413 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r"] Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.575548 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-manager-config\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.575600 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-webhook-cert\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.575628 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.575649 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-apiservice-cert\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.575684 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqctx\" 
(UniqueName: \"kubernetes.io/projected/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-kube-api-access-fqctx\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.676402 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.676451 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-apiservice-cert\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.676489 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqctx\" (UniqueName: \"kubernetes.io/projected/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-kube-api-access-fqctx\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.676533 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-manager-config\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.676559 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-webhook-cert\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.678365 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manager-config\" (UniqueName: \"kubernetes.io/configmap/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-manager-config\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.682105 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-webhook-cert\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.688510 4708 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-apiservice-cert\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.689391 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"loki-operator-metrics-cert\" (UniqueName: \"kubernetes.io/secret/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-loki-operator-metrics-cert\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.698150 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqctx\" (UniqueName: \"kubernetes.io/projected/64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37-kube-api-access-fqctx\") pod \"loki-operator-controller-manager-5545944799-2z66r\" (UID: \"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37\") " pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.778366 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:33 crc kubenswrapper[4708]: I0227 17:08:33.996062 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r"] Feb 27 17:08:34 crc kubenswrapper[4708]: I0227 17:08:34.442080 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9k4ch" podUID="ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" containerName="registry-server" probeResult="failure" output=< Feb 27 17:08:34 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 17:08:34 crc kubenswrapper[4708]: > Feb 27 17:08:34 crc kubenswrapper[4708]: I0227 17:08:34.609593 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" event={"ID":"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37","Type":"ContainerStarted","Data":"10f45953d04150574ea760ff9e38715e1816981e85db0ed027f8af631f282290"} Feb 27 17:08:39 crc kubenswrapper[4708]: I0227 17:08:39.658748 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" event={"ID":"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37","Type":"ContainerStarted","Data":"7455ae4124fadfb579c62b6e4be5d8dbcdba218642469a4ef96cc6afe344263a"} Feb 27 17:08:43 crc kubenswrapper[4708]: I0227 17:08:43.448934 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:43 crc kubenswrapper[4708]: I0227 17:08:43.513611 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:43 crc kubenswrapper[4708]: I0227 17:08:43.880763 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9k4ch"] Feb 27 17:08:44 crc kubenswrapper[4708]: I0227 17:08:44.694233 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9k4ch" 
podUID="ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" containerName="registry-server" containerID="cri-o://866c17f3a2d5ed284794672cb8d293c8f8b8816bbe895f9c0bb3ac9f6debd37f" gracePeriod=2 Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.408688 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.589285 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-utilities\") pod \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\" (UID: \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\") " Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.589688 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4s5ws\" (UniqueName: \"kubernetes.io/projected/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-kube-api-access-4s5ws\") pod \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\" (UID: \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\") " Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.589738 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-catalog-content\") pod \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\" (UID: \"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5\") " Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.590836 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-utilities" (OuterVolumeSpecName: "utilities") pod "ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" (UID: "ad67feec-cfdb-4647-a7a0-30b2ee10f2f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.597603 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-kube-api-access-4s5ws" (OuterVolumeSpecName: "kube-api-access-4s5ws") pod "ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" (UID: "ad67feec-cfdb-4647-a7a0-30b2ee10f2f5"). InnerVolumeSpecName "kube-api-access-4s5ws". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.691456 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.691496 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4s5ws\" (UniqueName: \"kubernetes.io/projected/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-kube-api-access-4s5ws\") on node \"crc\" DevicePath \"\"" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.715176 4708 generic.go:334] "Generic (PLEG): container finished" podID="ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" containerID="866c17f3a2d5ed284794672cb8d293c8f8b8816bbe895f9c0bb3ac9f6debd37f" exitCode=0 Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.715244 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9k4ch" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.715301 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9k4ch" event={"ID":"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5","Type":"ContainerDied","Data":"866c17f3a2d5ed284794672cb8d293c8f8b8816bbe895f9c0bb3ac9f6debd37f"} Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.715377 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9k4ch" event={"ID":"ad67feec-cfdb-4647-a7a0-30b2ee10f2f5","Type":"ContainerDied","Data":"a67421acd909ea6484dc63711c6db18779dc318ea0d689620494d708a3b0a84c"} Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.715412 4708 scope.go:117] "RemoveContainer" containerID="866c17f3a2d5ed284794672cb8d293c8f8b8816bbe895f9c0bb3ac9f6debd37f" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.721128 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" event={"ID":"64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37","Type":"ContainerStarted","Data":"b5b56a724062270d47e99f3ff97f1971cc2b1215ad14f30d08ce7a81adf6c4fb"} Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.722648 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.726931 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.741820 4708 scope.go:117] "RemoveContainer" containerID="ccbca2044c0d44382d09222d1c2e620738240a6061b684b46ac843d7309a9020" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.751278 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" (UID: "ad67feec-cfdb-4647-a7a0-30b2ee10f2f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.761084 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators-redhat/loki-operator-controller-manager-5545944799-2z66r" podStartSLOduration=1.568013305 podStartE2EDuration="12.761058842s" podCreationTimestamp="2026-02-27 17:08:33 +0000 UTC" firstStartedPulling="2026-02-27 17:08:34.005352234 +0000 UTC m=+912.521149821" lastFinishedPulling="2026-02-27 17:08:45.198397781 +0000 UTC m=+923.714195358" observedRunningTime="2026-02-27 17:08:45.75191963 +0000 UTC m=+924.267717257" watchObservedRunningTime="2026-02-27 17:08:45.761058842 +0000 UTC m=+924.276856469" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.781766 4708 scope.go:117] "RemoveContainer" containerID="254a7e26fe31003dea3d3e7a6d88c69c034da06dacd010539a3b34add1f29feb" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.795521 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.810424 4708 scope.go:117] "RemoveContainer" containerID="866c17f3a2d5ed284794672cb8d293c8f8b8816bbe895f9c0bb3ac9f6debd37f" Feb 27 17:08:45 crc kubenswrapper[4708]: E0227 17:08:45.811327 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"866c17f3a2d5ed284794672cb8d293c8f8b8816bbe895f9c0bb3ac9f6debd37f\": container with ID starting with 866c17f3a2d5ed284794672cb8d293c8f8b8816bbe895f9c0bb3ac9f6debd37f not found: ID does not exist" containerID="866c17f3a2d5ed284794672cb8d293c8f8b8816bbe895f9c0bb3ac9f6debd37f" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.811492 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"866c17f3a2d5ed284794672cb8d293c8f8b8816bbe895f9c0bb3ac9f6debd37f"} err="failed to get container status \"866c17f3a2d5ed284794672cb8d293c8f8b8816bbe895f9c0bb3ac9f6debd37f\": rpc error: code = NotFound desc = could not find container \"866c17f3a2d5ed284794672cb8d293c8f8b8816bbe895f9c0bb3ac9f6debd37f\": container with ID starting with 866c17f3a2d5ed284794672cb8d293c8f8b8816bbe895f9c0bb3ac9f6debd37f not found: ID does not exist" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.811653 4708 scope.go:117] "RemoveContainer" containerID="ccbca2044c0d44382d09222d1c2e620738240a6061b684b46ac843d7309a9020" Feb 27 17:08:45 crc kubenswrapper[4708]: E0227 17:08:45.812347 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ccbca2044c0d44382d09222d1c2e620738240a6061b684b46ac843d7309a9020\": container with ID starting with ccbca2044c0d44382d09222d1c2e620738240a6061b684b46ac843d7309a9020 not found: ID does not exist" containerID="ccbca2044c0d44382d09222d1c2e620738240a6061b684b46ac843d7309a9020" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.812521 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ccbca2044c0d44382d09222d1c2e620738240a6061b684b46ac843d7309a9020"} err="failed to get container status \"ccbca2044c0d44382d09222d1c2e620738240a6061b684b46ac843d7309a9020\": rpc error: code = NotFound desc = could not find container \"ccbca2044c0d44382d09222d1c2e620738240a6061b684b46ac843d7309a9020\": container with ID starting with 
ccbca2044c0d44382d09222d1c2e620738240a6061b684b46ac843d7309a9020 not found: ID does not exist" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.812679 4708 scope.go:117] "RemoveContainer" containerID="254a7e26fe31003dea3d3e7a6d88c69c034da06dacd010539a3b34add1f29feb" Feb 27 17:08:45 crc kubenswrapper[4708]: E0227 17:08:45.813346 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"254a7e26fe31003dea3d3e7a6d88c69c034da06dacd010539a3b34add1f29feb\": container with ID starting with 254a7e26fe31003dea3d3e7a6d88c69c034da06dacd010539a3b34add1f29feb not found: ID does not exist" containerID="254a7e26fe31003dea3d3e7a6d88c69c034da06dacd010539a3b34add1f29feb" Feb 27 17:08:45 crc kubenswrapper[4708]: I0227 17:08:45.813407 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"254a7e26fe31003dea3d3e7a6d88c69c034da06dacd010539a3b34add1f29feb"} err="failed to get container status \"254a7e26fe31003dea3d3e7a6d88c69c034da06dacd010539a3b34add1f29feb\": rpc error: code = NotFound desc = could not find container \"254a7e26fe31003dea3d3e7a6d88c69c034da06dacd010539a3b34add1f29feb\": container with ID starting with 254a7e26fe31003dea3d3e7a6d88c69c034da06dacd010539a3b34add1f29feb not found: ID does not exist" Feb 27 17:08:46 crc kubenswrapper[4708]: I0227 17:08:46.054447 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9k4ch"] Feb 27 17:08:46 crc kubenswrapper[4708]: I0227 17:08:46.058472 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9k4ch"] Feb 27 17:08:46 crc kubenswrapper[4708]: I0227 17:08:46.240169 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" path="/var/lib/kubelet/pods/ad67feec-cfdb-4647-a7a0-30b2ee10f2f5/volumes" Feb 27 17:08:55 crc kubenswrapper[4708]: I0227 17:08:55.824868 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-pphmc"] Feb 27 17:08:55 crc kubenswrapper[4708]: E0227 17:08:55.825514 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" containerName="registry-server" Feb 27 17:08:55 crc kubenswrapper[4708]: I0227 17:08:55.825527 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" containerName="registry-server" Feb 27 17:08:55 crc kubenswrapper[4708]: E0227 17:08:55.825545 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" containerName="extract-content" Feb 27 17:08:55 crc kubenswrapper[4708]: I0227 17:08:55.825551 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" containerName="extract-content" Feb 27 17:08:55 crc kubenswrapper[4708]: E0227 17:08:55.825566 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" containerName="extract-utilities" Feb 27 17:08:55 crc kubenswrapper[4708]: I0227 17:08:55.825572 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" containerName="extract-utilities" Feb 27 17:08:55 crc kubenswrapper[4708]: I0227 17:08:55.825660 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad67feec-cfdb-4647-a7a0-30b2ee10f2f5" containerName="registry-server" Feb 27 17:08:55 crc kubenswrapper[4708]: I0227 17:08:55.826356 
4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:08:55 crc kubenswrapper[4708]: I0227 17:08:55.853416 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pphmc"] Feb 27 17:08:55 crc kubenswrapper[4708]: I0227 17:08:55.936757 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5pxb\" (UniqueName: \"kubernetes.io/projected/2189486c-1c43-4445-b6d7-299b365ce2f5-kube-api-access-g5pxb\") pod \"redhat-marketplace-pphmc\" (UID: \"2189486c-1c43-4445-b6d7-299b365ce2f5\") " pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:08:55 crc kubenswrapper[4708]: I0227 17:08:55.937243 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2189486c-1c43-4445-b6d7-299b365ce2f5-utilities\") pod \"redhat-marketplace-pphmc\" (UID: \"2189486c-1c43-4445-b6d7-299b365ce2f5\") " pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:08:55 crc kubenswrapper[4708]: I0227 17:08:55.937315 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2189486c-1c43-4445-b6d7-299b365ce2f5-catalog-content\") pod \"redhat-marketplace-pphmc\" (UID: \"2189486c-1c43-4445-b6d7-299b365ce2f5\") " pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:08:56 crc kubenswrapper[4708]: I0227 17:08:56.037886 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2189486c-1c43-4445-b6d7-299b365ce2f5-utilities\") pod \"redhat-marketplace-pphmc\" (UID: \"2189486c-1c43-4445-b6d7-299b365ce2f5\") " pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:08:56 crc kubenswrapper[4708]: I0227 17:08:56.038038 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2189486c-1c43-4445-b6d7-299b365ce2f5-catalog-content\") pod \"redhat-marketplace-pphmc\" (UID: \"2189486c-1c43-4445-b6d7-299b365ce2f5\") " pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:08:56 crc kubenswrapper[4708]: I0227 17:08:56.038440 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2189486c-1c43-4445-b6d7-299b365ce2f5-utilities\") pod \"redhat-marketplace-pphmc\" (UID: \"2189486c-1c43-4445-b6d7-299b365ce2f5\") " pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:08:56 crc kubenswrapper[4708]: I0227 17:08:56.038639 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2189486c-1c43-4445-b6d7-299b365ce2f5-catalog-content\") pod \"redhat-marketplace-pphmc\" (UID: \"2189486c-1c43-4445-b6d7-299b365ce2f5\") " pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:08:56 crc kubenswrapper[4708]: I0227 17:08:56.039232 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g5pxb\" (UniqueName: \"kubernetes.io/projected/2189486c-1c43-4445-b6d7-299b365ce2f5-kube-api-access-g5pxb\") pod \"redhat-marketplace-pphmc\" (UID: \"2189486c-1c43-4445-b6d7-299b365ce2f5\") " pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:08:56 crc kubenswrapper[4708]: I0227 17:08:56.063784 
4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g5pxb\" (UniqueName: \"kubernetes.io/projected/2189486c-1c43-4445-b6d7-299b365ce2f5-kube-api-access-g5pxb\") pod \"redhat-marketplace-pphmc\" (UID: \"2189486c-1c43-4445-b6d7-299b365ce2f5\") " pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:08:56 crc kubenswrapper[4708]: I0227 17:08:56.150549 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:08:56 crc kubenswrapper[4708]: I0227 17:08:56.329541 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-pphmc"] Feb 27 17:08:56 crc kubenswrapper[4708]: W0227 17:08:56.341774 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2189486c_1c43_4445_b6d7_299b365ce2f5.slice/crio-70fd05f96f3a75327ad457074d15bd6a6184e77922b1c98cfa3abe453eb93ca8 WatchSource:0}: Error finding container 70fd05f96f3a75327ad457074d15bd6a6184e77922b1c98cfa3abe453eb93ca8: Status 404 returned error can't find the container with id 70fd05f96f3a75327ad457074d15bd6a6184e77922b1c98cfa3abe453eb93ca8 Feb 27 17:08:56 crc kubenswrapper[4708]: I0227 17:08:56.805018 4708 generic.go:334] "Generic (PLEG): container finished" podID="2189486c-1c43-4445-b6d7-299b365ce2f5" containerID="7d97b645f3d2b08d04f22655460182e02e4b7b6ae075d48ad8d4c65c90db3f6e" exitCode=0 Feb 27 17:08:56 crc kubenswrapper[4708]: I0227 17:08:56.805091 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pphmc" event={"ID":"2189486c-1c43-4445-b6d7-299b365ce2f5","Type":"ContainerDied","Data":"7d97b645f3d2b08d04f22655460182e02e4b7b6ae075d48ad8d4c65c90db3f6e"} Feb 27 17:08:56 crc kubenswrapper[4708]: I0227 17:08:56.805151 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pphmc" event={"ID":"2189486c-1c43-4445-b6d7-299b365ce2f5","Type":"ContainerStarted","Data":"70fd05f96f3a75327ad457074d15bd6a6184e77922b1c98cfa3abe453eb93ca8"} Feb 27 17:08:57 crc kubenswrapper[4708]: I0227 17:08:57.815577 4708 generic.go:334] "Generic (PLEG): container finished" podID="2189486c-1c43-4445-b6d7-299b365ce2f5" containerID="6aaae8843250ef81dcbd6b86f1f879227a08cc42d76939ec4f017f087b2de97a" exitCode=0 Feb 27 17:08:57 crc kubenswrapper[4708]: I0227 17:08:57.815643 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pphmc" event={"ID":"2189486c-1c43-4445-b6d7-299b365ce2f5","Type":"ContainerDied","Data":"6aaae8843250ef81dcbd6b86f1f879227a08cc42d76939ec4f017f087b2de97a"} Feb 27 17:08:58 crc kubenswrapper[4708]: I0227 17:08:58.827240 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pphmc" event={"ID":"2189486c-1c43-4445-b6d7-299b365ce2f5","Type":"ContainerStarted","Data":"54c5de3d283d094578c7a3514c04e4e0692bbe55961ee5dce5d045162f8d95ca"} Feb 27 17:08:58 crc kubenswrapper[4708]: I0227 17:08:58.853332 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-pphmc" podStartSLOduration=2.425246628 podStartE2EDuration="3.853309296s" podCreationTimestamp="2026-02-27 17:08:55 +0000 UTC" firstStartedPulling="2026-02-27 17:08:56.807173601 +0000 UTC m=+935.322971228" lastFinishedPulling="2026-02-27 17:08:58.235236279 +0000 UTC m=+936.751033896" observedRunningTime="2026-02-27 17:08:58.84995582 
+0000 UTC m=+937.365753437" watchObservedRunningTime="2026-02-27 17:08:58.853309296 +0000 UTC m=+937.369106923" Feb 27 17:09:05 crc kubenswrapper[4708]: I0227 17:09:05.631416 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:09:05 crc kubenswrapper[4708]: I0227 17:09:05.632337 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:09:06 crc kubenswrapper[4708]: I0227 17:09:06.150878 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:09:06 crc kubenswrapper[4708]: I0227 17:09:06.150976 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:09:06 crc kubenswrapper[4708]: I0227 17:09:06.222107 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:09:06 crc kubenswrapper[4708]: I0227 17:09:06.985059 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:09:07 crc kubenswrapper[4708]: I0227 17:09:07.043551 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pphmc"] Feb 27 17:09:08 crc kubenswrapper[4708]: I0227 17:09:08.902589 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-pphmc" podUID="2189486c-1c43-4445-b6d7-299b365ce2f5" containerName="registry-server" containerID="cri-o://54c5de3d283d094578c7a3514c04e4e0692bbe55961ee5dce5d045162f8d95ca" gracePeriod=2 Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.325699 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9"] Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.327212 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.328891 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.338487 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9"] Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.374961 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.443738 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9\" (UID: \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.443796 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9\" (UID: \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.443888 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld8nx\" (UniqueName: \"kubernetes.io/projected/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-kube-api-access-ld8nx\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9\" (UID: \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.544979 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2189486c-1c43-4445-b6d7-299b365ce2f5-utilities\") pod \"2189486c-1c43-4445-b6d7-299b365ce2f5\" (UID: \"2189486c-1c43-4445-b6d7-299b365ce2f5\") " Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.545162 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g5pxb\" (UniqueName: \"kubernetes.io/projected/2189486c-1c43-4445-b6d7-299b365ce2f5-kube-api-access-g5pxb\") pod \"2189486c-1c43-4445-b6d7-299b365ce2f5\" (UID: \"2189486c-1c43-4445-b6d7-299b365ce2f5\") " Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.545213 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2189486c-1c43-4445-b6d7-299b365ce2f5-catalog-content\") pod \"2189486c-1c43-4445-b6d7-299b365ce2f5\" (UID: \"2189486c-1c43-4445-b6d7-299b365ce2f5\") " Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.545555 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ld8nx\" (UniqueName: \"kubernetes.io/projected/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-kube-api-access-ld8nx\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9\" (UID: \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.545785 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9\" (UID: \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" Feb 27 17:09:09 crc 
kubenswrapper[4708]: I0227 17:09:09.545950 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9\" (UID: \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.546364 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-util\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9\" (UID: \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.546583 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-bundle\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9\" (UID: \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.546748 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2189486c-1c43-4445-b6d7-299b365ce2f5-utilities" (OuterVolumeSpecName: "utilities") pod "2189486c-1c43-4445-b6d7-299b365ce2f5" (UID: "2189486c-1c43-4445-b6d7-299b365ce2f5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.557128 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2189486c-1c43-4445-b6d7-299b365ce2f5-kube-api-access-g5pxb" (OuterVolumeSpecName: "kube-api-access-g5pxb") pod "2189486c-1c43-4445-b6d7-299b365ce2f5" (UID: "2189486c-1c43-4445-b6d7-299b365ce2f5"). InnerVolumeSpecName "kube-api-access-g5pxb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.573975 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2189486c-1c43-4445-b6d7-299b365ce2f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2189486c-1c43-4445-b6d7-299b365ce2f5" (UID: "2189486c-1c43-4445-b6d7-299b365ce2f5"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.579458 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ld8nx\" (UniqueName: \"kubernetes.io/projected/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-kube-api-access-ld8nx\") pod \"0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9\" (UID: \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\") " pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.647335 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2189486c-1c43-4445-b6d7-299b365ce2f5-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.647589 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2189486c-1c43-4445-b6d7-299b365ce2f5-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.648074 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g5pxb\" (UniqueName: \"kubernetes.io/projected/2189486c-1c43-4445-b6d7-299b365ce2f5-kube-api-access-g5pxb\") on node \"crc\" DevicePath \"\"" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.685968 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.916266 4708 generic.go:334] "Generic (PLEG): container finished" podID="2189486c-1c43-4445-b6d7-299b365ce2f5" containerID="54c5de3d283d094578c7a3514c04e4e0692bbe55961ee5dce5d045162f8d95ca" exitCode=0 Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.916445 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pphmc" event={"ID":"2189486c-1c43-4445-b6d7-299b365ce2f5","Type":"ContainerDied","Data":"54c5de3d283d094578c7a3514c04e4e0692bbe55961ee5dce5d045162f8d95ca"} Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.916616 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-pphmc" event={"ID":"2189486c-1c43-4445-b6d7-299b365ce2f5","Type":"ContainerDied","Data":"70fd05f96f3a75327ad457074d15bd6a6184e77922b1c98cfa3abe453eb93ca8"} Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.916596 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-pphmc" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.916635 4708 scope.go:117] "RemoveContainer" containerID="54c5de3d283d094578c7a3514c04e4e0692bbe55961ee5dce5d045162f8d95ca" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.954513 4708 scope.go:117] "RemoveContainer" containerID="6aaae8843250ef81dcbd6b86f1f879227a08cc42d76939ec4f017f087b2de97a" Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.956217 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-pphmc"] Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.960822 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-pphmc"] Feb 27 17:09:09 crc kubenswrapper[4708]: I0227 17:09:09.989819 4708 scope.go:117] "RemoveContainer" containerID="7d97b645f3d2b08d04f22655460182e02e4b7b6ae075d48ad8d4c65c90db3f6e" Feb 27 17:09:10 crc kubenswrapper[4708]: I0227 17:09:10.010550 4708 scope.go:117] "RemoveContainer" containerID="54c5de3d283d094578c7a3514c04e4e0692bbe55961ee5dce5d045162f8d95ca" Feb 27 17:09:10 crc kubenswrapper[4708]: E0227 17:09:10.010963 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54c5de3d283d094578c7a3514c04e4e0692bbe55961ee5dce5d045162f8d95ca\": container with ID starting with 54c5de3d283d094578c7a3514c04e4e0692bbe55961ee5dce5d045162f8d95ca not found: ID does not exist" containerID="54c5de3d283d094578c7a3514c04e4e0692bbe55961ee5dce5d045162f8d95ca" Feb 27 17:09:10 crc kubenswrapper[4708]: I0227 17:09:10.011005 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54c5de3d283d094578c7a3514c04e4e0692bbe55961ee5dce5d045162f8d95ca"} err="failed to get container status \"54c5de3d283d094578c7a3514c04e4e0692bbe55961ee5dce5d045162f8d95ca\": rpc error: code = NotFound desc = could not find container \"54c5de3d283d094578c7a3514c04e4e0692bbe55961ee5dce5d045162f8d95ca\": container with ID starting with 54c5de3d283d094578c7a3514c04e4e0692bbe55961ee5dce5d045162f8d95ca not found: ID does not exist" Feb 27 17:09:10 crc kubenswrapper[4708]: I0227 17:09:10.011033 4708 scope.go:117] "RemoveContainer" containerID="6aaae8843250ef81dcbd6b86f1f879227a08cc42d76939ec4f017f087b2de97a" Feb 27 17:09:10 crc kubenswrapper[4708]: E0227 17:09:10.011743 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6aaae8843250ef81dcbd6b86f1f879227a08cc42d76939ec4f017f087b2de97a\": container with ID starting with 6aaae8843250ef81dcbd6b86f1f879227a08cc42d76939ec4f017f087b2de97a not found: ID does not exist" containerID="6aaae8843250ef81dcbd6b86f1f879227a08cc42d76939ec4f017f087b2de97a" Feb 27 17:09:10 crc kubenswrapper[4708]: I0227 17:09:10.011800 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6aaae8843250ef81dcbd6b86f1f879227a08cc42d76939ec4f017f087b2de97a"} err="failed to get container status \"6aaae8843250ef81dcbd6b86f1f879227a08cc42d76939ec4f017f087b2de97a\": rpc error: code = NotFound desc = could not find container \"6aaae8843250ef81dcbd6b86f1f879227a08cc42d76939ec4f017f087b2de97a\": container with ID starting with 6aaae8843250ef81dcbd6b86f1f879227a08cc42d76939ec4f017f087b2de97a not found: ID does not exist" Feb 27 17:09:10 crc kubenswrapper[4708]: I0227 17:09:10.011833 4708 scope.go:117] "RemoveContainer" 
containerID="7d97b645f3d2b08d04f22655460182e02e4b7b6ae075d48ad8d4c65c90db3f6e" Feb 27 17:09:10 crc kubenswrapper[4708]: E0227 17:09:10.012247 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d97b645f3d2b08d04f22655460182e02e4b7b6ae075d48ad8d4c65c90db3f6e\": container with ID starting with 7d97b645f3d2b08d04f22655460182e02e4b7b6ae075d48ad8d4c65c90db3f6e not found: ID does not exist" containerID="7d97b645f3d2b08d04f22655460182e02e4b7b6ae075d48ad8d4c65c90db3f6e" Feb 27 17:09:10 crc kubenswrapper[4708]: I0227 17:09:10.012307 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d97b645f3d2b08d04f22655460182e02e4b7b6ae075d48ad8d4c65c90db3f6e"} err="failed to get container status \"7d97b645f3d2b08d04f22655460182e02e4b7b6ae075d48ad8d4c65c90db3f6e\": rpc error: code = NotFound desc = could not find container \"7d97b645f3d2b08d04f22655460182e02e4b7b6ae075d48ad8d4c65c90db3f6e\": container with ID starting with 7d97b645f3d2b08d04f22655460182e02e4b7b6ae075d48ad8d4c65c90db3f6e not found: ID does not exist" Feb 27 17:09:10 crc kubenswrapper[4708]: I0227 17:09:10.200393 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9"] Feb 27 17:09:10 crc kubenswrapper[4708]: I0227 17:09:10.236227 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2189486c-1c43-4445-b6d7-299b365ce2f5" path="/var/lib/kubelet/pods/2189486c-1c43-4445-b6d7-299b365ce2f5/volumes" Feb 27 17:09:10 crc kubenswrapper[4708]: I0227 17:09:10.928494 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" event={"ID":"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63","Type":"ContainerStarted","Data":"fe26a1f49fb6eb315933174f24b13b57d8a7aef413b67f26f9f6794be8625153"} Feb 27 17:09:11 crc kubenswrapper[4708]: I0227 17:09:11.939053 4708 generic.go:334] "Generic (PLEG): container finished" podID="9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63" containerID="f07658bffc63e0ec429602365824870cd960538872e871e5dbba4c13b960c11c" exitCode=0 Feb 27 17:09:11 crc kubenswrapper[4708]: I0227 17:09:11.939129 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" event={"ID":"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63","Type":"ContainerDied","Data":"f07658bffc63e0ec429602365824870cd960538872e871e5dbba4c13b960c11c"} Feb 27 17:09:11 crc kubenswrapper[4708]: I0227 17:09:11.941634 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:09:13 crc kubenswrapper[4708]: I0227 17:09:13.966671 4708 generic.go:334] "Generic (PLEG): container finished" podID="9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63" containerID="c75b0ffbfae663a1675b1664eb81e8c31d61c246ffbb1cfabace148c6b4956d2" exitCode=0 Feb 27 17:09:13 crc kubenswrapper[4708]: I0227 17:09:13.967284 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" event={"ID":"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63","Type":"ContainerDied","Data":"c75b0ffbfae663a1675b1664eb81e8c31d61c246ffbb1cfabace148c6b4956d2"} Feb 27 17:09:14 crc kubenswrapper[4708]: I0227 17:09:14.977737 4708 generic.go:334] "Generic (PLEG): container finished" podID="9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63" 
containerID="a925a0dd9ffa3c94d1380d3ac03f2f14bee9f5b37d5d329bb2dcd350b7dc4505" exitCode=0 Feb 27 17:09:14 crc kubenswrapper[4708]: I0227 17:09:14.977797 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" event={"ID":"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63","Type":"ContainerDied","Data":"a925a0dd9ffa3c94d1380d3ac03f2f14bee9f5b37d5d329bb2dcd350b7dc4505"} Feb 27 17:09:16 crc kubenswrapper[4708]: I0227 17:09:16.251011 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" Feb 27 17:09:16 crc kubenswrapper[4708]: I0227 17:09:16.345251 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-bundle\") pod \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\" (UID: \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\") " Feb 27 17:09:16 crc kubenswrapper[4708]: I0227 17:09:16.345356 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-util\") pod \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\" (UID: \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\") " Feb 27 17:09:16 crc kubenswrapper[4708]: I0227 17:09:16.345382 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ld8nx\" (UniqueName: \"kubernetes.io/projected/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-kube-api-access-ld8nx\") pod \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\" (UID: \"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63\") " Feb 27 17:09:16 crc kubenswrapper[4708]: I0227 17:09:16.345735 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-bundle" (OuterVolumeSpecName: "bundle") pod "9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63" (UID: "9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:09:16 crc kubenswrapper[4708]: I0227 17:09:16.351348 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-kube-api-access-ld8nx" (OuterVolumeSpecName: "kube-api-access-ld8nx") pod "9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63" (UID: "9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63"). InnerVolumeSpecName "kube-api-access-ld8nx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:09:16 crc kubenswrapper[4708]: I0227 17:09:16.363904 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-util" (OuterVolumeSpecName: "util") pod "9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63" (UID: "9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:09:16 crc kubenswrapper[4708]: I0227 17:09:16.446703 4708 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-util\") on node \"crc\" DevicePath \"\"" Feb 27 17:09:16 crc kubenswrapper[4708]: I0227 17:09:16.446754 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ld8nx\" (UniqueName: \"kubernetes.io/projected/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-kube-api-access-ld8nx\") on node \"crc\" DevicePath \"\"" Feb 27 17:09:16 crc kubenswrapper[4708]: I0227 17:09:16.446772 4708 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:09:16 crc kubenswrapper[4708]: I0227 17:09:16.997030 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" event={"ID":"9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63","Type":"ContainerDied","Data":"fe26a1f49fb6eb315933174f24b13b57d8a7aef413b67f26f9f6794be8625153"} Feb 27 17:09:16 crc kubenswrapper[4708]: I0227 17:09:16.997647 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe26a1f49fb6eb315933174f24b13b57d8a7aef413b67f26f9f6794be8625153" Feb 27 17:09:16 crc kubenswrapper[4708]: I0227 17:09:16.997139 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.466426 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-2mqdb"] Feb 27 17:09:19 crc kubenswrapper[4708]: E0227 17:09:19.466762 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63" containerName="util" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.466782 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63" containerName="util" Feb 27 17:09:19 crc kubenswrapper[4708]: E0227 17:09:19.466817 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63" containerName="extract" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.466831 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63" containerName="extract" Feb 27 17:09:19 crc kubenswrapper[4708]: E0227 17:09:19.466895 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2189486c-1c43-4445-b6d7-299b365ce2f5" containerName="registry-server" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.466908 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="2189486c-1c43-4445-b6d7-299b365ce2f5" containerName="registry-server" Feb 27 17:09:19 crc kubenswrapper[4708]: E0227 17:09:19.466930 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2189486c-1c43-4445-b6d7-299b365ce2f5" containerName="extract-content" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.466943 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="2189486c-1c43-4445-b6d7-299b365ce2f5" containerName="extract-content" Feb 27 17:09:19 crc kubenswrapper[4708]: E0227 17:09:19.466962 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2189486c-1c43-4445-b6d7-299b365ce2f5" 
containerName="extract-utilities" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.466975 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="2189486c-1c43-4445-b6d7-299b365ce2f5" containerName="extract-utilities" Feb 27 17:09:19 crc kubenswrapper[4708]: E0227 17:09:19.467009 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63" containerName="pull" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.467021 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63" containerName="pull" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.467214 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63" containerName="extract" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.467241 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="2189486c-1c43-4445-b6d7-299b365ce2f5" containerName="registry-server" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.467980 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2mqdb" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.471497 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-zcdx8" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.473024 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.473511 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.487738 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-2mqdb"] Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.616123 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsp76\" (UniqueName: \"kubernetes.io/projected/2b0a02a3-1871-4bf5-a292-e2bb406be9b1-kube-api-access-tsp76\") pod \"nmstate-operator-75c5dccd6c-2mqdb\" (UID: \"2b0a02a3-1871-4bf5-a292-e2bb406be9b1\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2mqdb" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.718012 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsp76\" (UniqueName: \"kubernetes.io/projected/2b0a02a3-1871-4bf5-a292-e2bb406be9b1-kube-api-access-tsp76\") pod \"nmstate-operator-75c5dccd6c-2mqdb\" (UID: \"2b0a02a3-1871-4bf5-a292-e2bb406be9b1\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2mqdb" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.741972 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsp76\" (UniqueName: \"kubernetes.io/projected/2b0a02a3-1871-4bf5-a292-e2bb406be9b1-kube-api-access-tsp76\") pod \"nmstate-operator-75c5dccd6c-2mqdb\" (UID: \"2b0a02a3-1871-4bf5-a292-e2bb406be9b1\") " pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2mqdb" Feb 27 17:09:19 crc kubenswrapper[4708]: I0227 17:09:19.795170 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2mqdb" Feb 27 17:09:20 crc kubenswrapper[4708]: I0227 17:09:20.399982 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-75c5dccd6c-2mqdb"] Feb 27 17:09:21 crc kubenswrapper[4708]: I0227 17:09:21.033087 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2mqdb" event={"ID":"2b0a02a3-1871-4bf5-a292-e2bb406be9b1","Type":"ContainerStarted","Data":"c5566931b29bbe085f31ffdc01e681f54fcade8ebb703fedbbbf139766a222d8"} Feb 27 17:09:23 crc kubenswrapper[4708]: I0227 17:09:23.087886 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2mqdb" event={"ID":"2b0a02a3-1871-4bf5-a292-e2bb406be9b1","Type":"ContainerStarted","Data":"6a2f9604cb42eeb0b5036cc3c9c42d4e9f7f6488477a7ce9bf33e65129327796"} Feb 27 17:09:23 crc kubenswrapper[4708]: I0227 17:09:23.121516 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-75c5dccd6c-2mqdb" podStartSLOduration=1.706595555 podStartE2EDuration="4.121493528s" podCreationTimestamp="2026-02-27 17:09:19 +0000 UTC" firstStartedPulling="2026-02-27 17:09:20.401873301 +0000 UTC m=+958.917670898" lastFinishedPulling="2026-02-27 17:09:22.816771274 +0000 UTC m=+961.332568871" observedRunningTime="2026-02-27 17:09:23.114311613 +0000 UTC m=+961.630109240" watchObservedRunningTime="2026-02-27 17:09:23.121493528 +0000 UTC m=+961.637291145" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.222372 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-mjd67"] Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.223987 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-mjd67" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.226886 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-xxwqn" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.235794 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-4mk88"] Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.236614 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.239464 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.248041 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-mjd67"] Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.251420 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-4mk88"] Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.256577 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-q6bvp"] Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.257468 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.277742 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/13879980-37e2-49a9-a9ba-056ba7fb5698-dbus-socket\") pod \"nmstate-handler-q6bvp\" (UID: \"13879980-37e2-49a9-a9ba-056ba7fb5698\") " pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.277806 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcg77\" (UniqueName: \"kubernetes.io/projected/6c61d3bb-a5e6-4206-a47a-9d6fcba04da4-kube-api-access-xcg77\") pod \"nmstate-webhook-786f45cff4-4mk88\" (UID: \"6c61d3bb-a5e6-4206-a47a-9d6fcba04da4\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.278081 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6c61d3bb-a5e6-4206-a47a-9d6fcba04da4-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-4mk88\" (UID: \"6c61d3bb-a5e6-4206-a47a-9d6fcba04da4\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.278790 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/13879980-37e2-49a9-a9ba-056ba7fb5698-nmstate-lock\") pod \"nmstate-handler-q6bvp\" (UID: \"13879980-37e2-49a9-a9ba-056ba7fb5698\") " pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.354641 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc"] Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.355350 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.357858 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.358034 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.358172 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-z9bfc" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.372367 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc"] Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.381059 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/13879980-37e2-49a9-a9ba-056ba7fb5698-dbus-socket\") pod \"nmstate-handler-q6bvp\" (UID: \"13879980-37e2-49a9-a9ba-056ba7fb5698\") " pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.381111 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/13879980-37e2-49a9-a9ba-056ba7fb5698-dbus-socket\") pod \"nmstate-handler-q6bvp\" (UID: \"13879980-37e2-49a9-a9ba-056ba7fb5698\") " pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.381171 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-slnn8\" (UniqueName: \"kubernetes.io/projected/fdbd97e5-232b-4c09-b936-7258fc72a153-kube-api-access-slnn8\") pod \"nmstate-metrics-69594cc75-mjd67\" (UID: \"fdbd97e5-232b-4c09-b936-7258fc72a153\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-mjd67" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.381212 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcg77\" (UniqueName: \"kubernetes.io/projected/6c61d3bb-a5e6-4206-a47a-9d6fcba04da4-kube-api-access-xcg77\") pod \"nmstate-webhook-786f45cff4-4mk88\" (UID: \"6c61d3bb-a5e6-4206-a47a-9d6fcba04da4\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.381234 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6c61d3bb-a5e6-4206-a47a-9d6fcba04da4-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-4mk88\" (UID: \"6c61d3bb-a5e6-4206-a47a-9d6fcba04da4\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.381251 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp7tr\" (UniqueName: \"kubernetes.io/projected/13879980-37e2-49a9-a9ba-056ba7fb5698-kube-api-access-tp7tr\") pod \"nmstate-handler-q6bvp\" (UID: \"13879980-37e2-49a9-a9ba-056ba7fb5698\") " pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.381299 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/13879980-37e2-49a9-a9ba-056ba7fb5698-ovs-socket\") pod \"nmstate-handler-q6bvp\" (UID: \"13879980-37e2-49a9-a9ba-056ba7fb5698\") " 
pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.381316 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/13879980-37e2-49a9-a9ba-056ba7fb5698-nmstate-lock\") pod \"nmstate-handler-q6bvp\" (UID: \"13879980-37e2-49a9-a9ba-056ba7fb5698\") " pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.381377 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/13879980-37e2-49a9-a9ba-056ba7fb5698-nmstate-lock\") pod \"nmstate-handler-q6bvp\" (UID: \"13879980-37e2-49a9-a9ba-056ba7fb5698\") " pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:30 crc kubenswrapper[4708]: E0227 17:09:30.381761 4708 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Feb 27 17:09:30 crc kubenswrapper[4708]: E0227 17:09:30.381969 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6c61d3bb-a5e6-4206-a47a-9d6fcba04da4-tls-key-pair podName:6c61d3bb-a5e6-4206-a47a-9d6fcba04da4 nodeName:}" failed. No retries permitted until 2026-02-27 17:09:30.881951098 +0000 UTC m=+969.397748685 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/6c61d3bb-a5e6-4206-a47a-9d6fcba04da4-tls-key-pair") pod "nmstate-webhook-786f45cff4-4mk88" (UID: "6c61d3bb-a5e6-4206-a47a-9d6fcba04da4") : secret "openshift-nmstate-webhook" not found Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.416504 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcg77\" (UniqueName: \"kubernetes.io/projected/6c61d3bb-a5e6-4206-a47a-9d6fcba04da4-kube-api-access-xcg77\") pod \"nmstate-webhook-786f45cff4-4mk88\" (UID: \"6c61d3bb-a5e6-4206-a47a-9d6fcba04da4\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.482500 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-slnn8\" (UniqueName: \"kubernetes.io/projected/fdbd97e5-232b-4c09-b936-7258fc72a153-kube-api-access-slnn8\") pod \"nmstate-metrics-69594cc75-mjd67\" (UID: \"fdbd97e5-232b-4c09-b936-7258fc72a153\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-mjd67" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.482696 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp7tr\" (UniqueName: \"kubernetes.io/projected/13879980-37e2-49a9-a9ba-056ba7fb5698-kube-api-access-tp7tr\") pod \"nmstate-handler-q6bvp\" (UID: \"13879980-37e2-49a9-a9ba-056ba7fb5698\") " pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.482795 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bjcmm\" (UniqueName: \"kubernetes.io/projected/d0f73455-2b50-4d77-8943-a75587af8b9d-kube-api-access-bjcmm\") pod \"nmstate-console-plugin-5dcbbd79cf-vhqgc\" (UID: \"d0f73455-2b50-4d77-8943-a75587af8b9d\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.482954 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/13879980-37e2-49a9-a9ba-056ba7fb5698-ovs-socket\") pod \"nmstate-handler-q6bvp\" (UID: \"13879980-37e2-49a9-a9ba-056ba7fb5698\") " pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.483080 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d0f73455-2b50-4d77-8943-a75587af8b9d-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-vhqgc\" (UID: \"d0f73455-2b50-4d77-8943-a75587af8b9d\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.483193 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0f73455-2b50-4d77-8943-a75587af8b9d-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-vhqgc\" (UID: \"d0f73455-2b50-4d77-8943-a75587af8b9d\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.483202 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/13879980-37e2-49a9-a9ba-056ba7fb5698-ovs-socket\") pod \"nmstate-handler-q6bvp\" (UID: \"13879980-37e2-49a9-a9ba-056ba7fb5698\") " pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.498817 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp7tr\" (UniqueName: \"kubernetes.io/projected/13879980-37e2-49a9-a9ba-056ba7fb5698-kube-api-access-tp7tr\") pod \"nmstate-handler-q6bvp\" (UID: \"13879980-37e2-49a9-a9ba-056ba7fb5698\") " pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.501095 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-slnn8\" (UniqueName: \"kubernetes.io/projected/fdbd97e5-232b-4c09-b936-7258fc72a153-kube-api-access-slnn8\") pod \"nmstate-metrics-69594cc75-mjd67\" (UID: \"fdbd97e5-232b-4c09-b936-7258fc72a153\") " pod="openshift-nmstate/nmstate-metrics-69594cc75-mjd67" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.537835 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5484948578-zng2t"] Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.538502 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.543737 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-69594cc75-mjd67" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.556075 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5484948578-zng2t"] Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.568824 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.584617 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b7f56970-aa3a-452d-8995-e455102f70e3-console-config\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.584661 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7f56970-aa3a-452d-8995-e455102f70e3-trusted-ca-bundle\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.584678 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b7f56970-aa3a-452d-8995-e455102f70e3-oauth-serving-cert\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.584711 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d0f73455-2b50-4d77-8943-a75587af8b9d-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-vhqgc\" (UID: \"d0f73455-2b50-4d77-8943-a75587af8b9d\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.584733 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0f73455-2b50-4d77-8943-a75587af8b9d-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-vhqgc\" (UID: \"d0f73455-2b50-4d77-8943-a75587af8b9d\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.584763 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7f56970-aa3a-452d-8995-e455102f70e3-service-ca\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.584779 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b7f56970-aa3a-452d-8995-e455102f70e3-console-oauth-config\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.585266 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmnzr\" (UniqueName: \"kubernetes.io/projected/b7f56970-aa3a-452d-8995-e455102f70e3-kube-api-access-jmnzr\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.585388 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjcmm\" 
(UniqueName: \"kubernetes.io/projected/d0f73455-2b50-4d77-8943-a75587af8b9d-kube-api-access-bjcmm\") pod \"nmstate-console-plugin-5dcbbd79cf-vhqgc\" (UID: \"d0f73455-2b50-4d77-8943-a75587af8b9d\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.585469 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7f56970-aa3a-452d-8995-e455102f70e3-console-serving-cert\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.585642 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/d0f73455-2b50-4d77-8943-a75587af8b9d-nginx-conf\") pod \"nmstate-console-plugin-5dcbbd79cf-vhqgc\" (UID: \"d0f73455-2b50-4d77-8943-a75587af8b9d\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.587738 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/d0f73455-2b50-4d77-8943-a75587af8b9d-plugin-serving-cert\") pod \"nmstate-console-plugin-5dcbbd79cf-vhqgc\" (UID: \"d0f73455-2b50-4d77-8943-a75587af8b9d\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.605176 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjcmm\" (UniqueName: \"kubernetes.io/projected/d0f73455-2b50-4d77-8943-a75587af8b9d-kube-api-access-bjcmm\") pod \"nmstate-console-plugin-5dcbbd79cf-vhqgc\" (UID: \"d0f73455-2b50-4d77-8943-a75587af8b9d\") " pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.671754 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.685991 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7f56970-aa3a-452d-8995-e455102f70e3-service-ca\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.686215 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b7f56970-aa3a-452d-8995-e455102f70e3-console-oauth-config\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.686256 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jmnzr\" (UniqueName: \"kubernetes.io/projected/b7f56970-aa3a-452d-8995-e455102f70e3-kube-api-access-jmnzr\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.686298 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7f56970-aa3a-452d-8995-e455102f70e3-console-serving-cert\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.686330 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b7f56970-aa3a-452d-8995-e455102f70e3-console-config\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.686353 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7f56970-aa3a-452d-8995-e455102f70e3-trusted-ca-bundle\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.686373 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b7f56970-aa3a-452d-8995-e455102f70e3-oauth-serving-cert\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.687057 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b7f56970-aa3a-452d-8995-e455102f70e3-service-ca\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.687589 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b7f56970-aa3a-452d-8995-e455102f70e3-oauth-serving-cert\") pod \"console-5484948578-zng2t\" (UID: 
\"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.687968 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b7f56970-aa3a-452d-8995-e455102f70e3-console-config\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.688310 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b7f56970-aa3a-452d-8995-e455102f70e3-trusted-ca-bundle\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.690663 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b7f56970-aa3a-452d-8995-e455102f70e3-console-oauth-config\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.691108 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b7f56970-aa3a-452d-8995-e455102f70e3-console-serving-cert\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.717601 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jmnzr\" (UniqueName: \"kubernetes.io/projected/b7f56970-aa3a-452d-8995-e455102f70e3-kube-api-access-jmnzr\") pod \"console-5484948578-zng2t\" (UID: \"b7f56970-aa3a-452d-8995-e455102f70e3\") " pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.755836 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-69594cc75-mjd67"] Feb 27 17:09:30 crc kubenswrapper[4708]: W0227 17:09:30.757383 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfdbd97e5_232b_4c09_b936_7258fc72a153.slice/crio-8c0ad6d629cfa2316d57623bf5ff7993aa43d4423fd15dcf66de83bd98afb3d7 WatchSource:0}: Error finding container 8c0ad6d629cfa2316d57623bf5ff7993aa43d4423fd15dcf66de83bd98afb3d7: Status 404 returned error can't find the container with id 8c0ad6d629cfa2316d57623bf5ff7993aa43d4423fd15dcf66de83bd98afb3d7 Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.852433 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.888298 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6c61d3bb-a5e6-4206-a47a-9d6fcba04da4-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-4mk88\" (UID: \"6c61d3bb-a5e6-4206-a47a-9d6fcba04da4\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.891497 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/6c61d3bb-a5e6-4206-a47a-9d6fcba04da4-tls-key-pair\") pod \"nmstate-webhook-786f45cff4-4mk88\" (UID: \"6c61d3bb-a5e6-4206-a47a-9d6fcba04da4\") " pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" Feb 27 17:09:30 crc kubenswrapper[4708]: I0227 17:09:30.891553 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc"] Feb 27 17:09:30 crc kubenswrapper[4708]: W0227 17:09:30.894708 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0f73455_2b50_4d77_8943_a75587af8b9d.slice/crio-a600fb6653c8704ba176e3740d46a1d98bb13ffcb5df81ff159610dede802cb4 WatchSource:0}: Error finding container a600fb6653c8704ba176e3740d46a1d98bb13ffcb5df81ff159610dede802cb4: Status 404 returned error can't find the container with id a600fb6653c8704ba176e3740d46a1d98bb13ffcb5df81ff159610dede802cb4 Feb 27 17:09:31 crc kubenswrapper[4708]: I0227 17:09:31.098935 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5484948578-zng2t"] Feb 27 17:09:31 crc kubenswrapper[4708]: W0227 17:09:31.114037 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb7f56970_aa3a_452d_8995_e455102f70e3.slice/crio-826dd273b64f832e6c683ac12e2bc0e2fcf067cf1c4d0b00775d7b9c52afadbc WatchSource:0}: Error finding container 826dd273b64f832e6c683ac12e2bc0e2fcf067cf1c4d0b00775d7b9c52afadbc: Status 404 returned error can't find the container with id 826dd273b64f832e6c683ac12e2bc0e2fcf067cf1c4d0b00775d7b9c52afadbc Feb 27 17:09:31 crc kubenswrapper[4708]: I0227 17:09:31.155203 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" Feb 27 17:09:31 crc kubenswrapper[4708]: I0227 17:09:31.168233 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q6bvp" event={"ID":"13879980-37e2-49a9-a9ba-056ba7fb5698","Type":"ContainerStarted","Data":"d42159ee62c7532f9d644c9a1365731db59532ca3e6d01d5562400e057b56a75"} Feb 27 17:09:31 crc kubenswrapper[4708]: I0227 17:09:31.170233 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5484948578-zng2t" event={"ID":"b7f56970-aa3a-452d-8995-e455102f70e3","Type":"ContainerStarted","Data":"826dd273b64f832e6c683ac12e2bc0e2fcf067cf1c4d0b00775d7b9c52afadbc"} Feb 27 17:09:31 crc kubenswrapper[4708]: I0227 17:09:31.171270 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc" event={"ID":"d0f73455-2b50-4d77-8943-a75587af8b9d","Type":"ContainerStarted","Data":"a600fb6653c8704ba176e3740d46a1d98bb13ffcb5df81ff159610dede802cb4"} Feb 27 17:09:31 crc kubenswrapper[4708]: I0227 17:09:31.171959 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-mjd67" event={"ID":"fdbd97e5-232b-4c09-b936-7258fc72a153","Type":"ContainerStarted","Data":"8c0ad6d629cfa2316d57623bf5ff7993aa43d4423fd15dcf66de83bd98afb3d7"} Feb 27 17:09:31 crc kubenswrapper[4708]: I0227 17:09:31.350880 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-786f45cff4-4mk88"] Feb 27 17:09:31 crc kubenswrapper[4708]: W0227 17:09:31.362628 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c61d3bb_a5e6_4206_a47a_9d6fcba04da4.slice/crio-464cb516d93a527e23a1d81290a6c7d6813f67615e2ff50d17d019b7250942d6 WatchSource:0}: Error finding container 464cb516d93a527e23a1d81290a6c7d6813f67615e2ff50d17d019b7250942d6: Status 404 returned error can't find the container with id 464cb516d93a527e23a1d81290a6c7d6813f67615e2ff50d17d019b7250942d6 Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 17:09:32.180646 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5484948578-zng2t" event={"ID":"b7f56970-aa3a-452d-8995-e455102f70e3","Type":"ContainerStarted","Data":"1c92c084354c368cd69bddb34748a2d965d009b0784595eda1878dfb4394e02e"} Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 17:09:32.185430 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" event={"ID":"6c61d3bb-a5e6-4206-a47a-9d6fcba04da4","Type":"ContainerStarted","Data":"464cb516d93a527e23a1d81290a6c7d6813f67615e2ff50d17d019b7250942d6"} Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 17:09:32.210345 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5484948578-zng2t" podStartSLOduration=2.210323558 podStartE2EDuration="2.210323558s" podCreationTimestamp="2026-02-27 17:09:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:09:32.203802462 +0000 UTC m=+970.719600049" watchObservedRunningTime="2026-02-27 17:09:32.210323558 +0000 UTC m=+970.726121155" Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 17:09:32.787745 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8dlr7"] Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 17:09:32.789116 4708 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 17:09:32.802912 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8dlr7"] Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 17:09:32.825055 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qtc2\" (UniqueName: \"kubernetes.io/projected/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-kube-api-access-5qtc2\") pod \"certified-operators-8dlr7\" (UID: \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\") " pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 17:09:32.825517 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-utilities\") pod \"certified-operators-8dlr7\" (UID: \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\") " pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 17:09:32.825658 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-catalog-content\") pod \"certified-operators-8dlr7\" (UID: \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\") " pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 17:09:32.927076 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-catalog-content\") pod \"certified-operators-8dlr7\" (UID: \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\") " pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 17:09:32.927200 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qtc2\" (UniqueName: \"kubernetes.io/projected/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-kube-api-access-5qtc2\") pod \"certified-operators-8dlr7\" (UID: \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\") " pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 17:09:32.927268 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-utilities\") pod \"certified-operators-8dlr7\" (UID: \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\") " pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 17:09:32.927877 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-catalog-content\") pod \"certified-operators-8dlr7\" (UID: \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\") " pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 17:09:32.928001 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-utilities\") pod \"certified-operators-8dlr7\" (UID: \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\") " pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:32 crc kubenswrapper[4708]: I0227 
17:09:32.956910 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qtc2\" (UniqueName: \"kubernetes.io/projected/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-kube-api-access-5qtc2\") pod \"certified-operators-8dlr7\" (UID: \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\") " pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:33 crc kubenswrapper[4708]: I0227 17:09:33.115828 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.017734 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8dlr7"] Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.207381 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-mjd67" event={"ID":"fdbd97e5-232b-4c09-b936-7258fc72a153","Type":"ContainerStarted","Data":"fdf1033db3ad15bbd4f413ce10829f982b47a430f393b90a480b14e1ae92714e"} Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.209239 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-q6bvp" event={"ID":"13879980-37e2-49a9-a9ba-056ba7fb5698","Type":"ContainerStarted","Data":"7e05a2418819e9dd6c0b363bea8fdbeed8908da250917113ba26155d4d63c208"} Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.209388 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.214037 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" event={"ID":"6c61d3bb-a5e6-4206-a47a-9d6fcba04da4","Type":"ContainerStarted","Data":"f8c624adbec3447e7dd661c0899459e04834d43c0cdbc2d6a92fafe14252cbc6"} Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.214183 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.216193 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc" event={"ID":"d0f73455-2b50-4d77-8943-a75587af8b9d","Type":"ContainerStarted","Data":"752a4b442bf472211cbf48455093c21ac85b7a9c7be43d94b8dcfac862a39092"} Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.217506 4708 generic.go:334] "Generic (PLEG): container finished" podID="9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" containerID="2ea1c6c34635d2b9123774b0cfd6e523d07b3c300fda397acbf65ef70b488f01" exitCode=0 Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.217541 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dlr7" event={"ID":"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23","Type":"ContainerDied","Data":"2ea1c6c34635d2b9123774b0cfd6e523d07b3c300fda397acbf65ef70b488f01"} Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.217562 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dlr7" event={"ID":"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23","Type":"ContainerStarted","Data":"b702a744d621460d781d356d8c4c2306b83a2e26bce33ce77b299058249923be"} Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.225290 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-q6bvp" podStartSLOduration=0.97956677 podStartE2EDuration="5.225275275s" 
podCreationTimestamp="2026-02-27 17:09:30 +0000 UTC" firstStartedPulling="2026-02-27 17:09:30.60202766 +0000 UTC m=+969.117825247" lastFinishedPulling="2026-02-27 17:09:34.847736155 +0000 UTC m=+973.363533752" observedRunningTime="2026-02-27 17:09:35.224679129 +0000 UTC m=+973.740476716" watchObservedRunningTime="2026-02-27 17:09:35.225275275 +0000 UTC m=+973.741072862" Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.253516 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" podStartSLOduration=1.8140006579999999 podStartE2EDuration="5.253499065s" podCreationTimestamp="2026-02-27 17:09:30 +0000 UTC" firstStartedPulling="2026-02-27 17:09:31.366266171 +0000 UTC m=+969.882063758" lastFinishedPulling="2026-02-27 17:09:34.805764568 +0000 UTC m=+973.321562165" observedRunningTime="2026-02-27 17:09:35.252450179 +0000 UTC m=+973.768247766" watchObservedRunningTime="2026-02-27 17:09:35.253499065 +0000 UTC m=+973.769296652" Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.275149 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-5dcbbd79cf-vhqgc" podStartSLOduration=1.366620312 podStartE2EDuration="5.27512694s" podCreationTimestamp="2026-02-27 17:09:30 +0000 UTC" firstStartedPulling="2026-02-27 17:09:30.896532762 +0000 UTC m=+969.412330349" lastFinishedPulling="2026-02-27 17:09:34.80503938 +0000 UTC m=+973.320836977" observedRunningTime="2026-02-27 17:09:35.271579641 +0000 UTC m=+973.787377228" watchObservedRunningTime="2026-02-27 17:09:35.27512694 +0000 UTC m=+973.790924517" Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.632110 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:09:35 crc kubenswrapper[4708]: I0227 17:09:35.632543 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:09:36 crc kubenswrapper[4708]: I0227 17:09:36.236986 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dlr7" event={"ID":"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23","Type":"ContainerStarted","Data":"9b911e118cca60c0378fd15ea793d95e898da6212b5b3fbacef8608a870cd4c5"} Feb 27 17:09:37 crc kubenswrapper[4708]: I0227 17:09:37.236175 4708 generic.go:334] "Generic (PLEG): container finished" podID="9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" containerID="9b911e118cca60c0378fd15ea793d95e898da6212b5b3fbacef8608a870cd4c5" exitCode=0 Feb 27 17:09:37 crc kubenswrapper[4708]: I0227 17:09:37.236452 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dlr7" event={"ID":"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23","Type":"ContainerDied","Data":"9b911e118cca60c0378fd15ea793d95e898da6212b5b3fbacef8608a870cd4c5"} Feb 27 17:09:38 crc kubenswrapper[4708]: I0227 17:09:38.248018 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-69594cc75-mjd67" 
event={"ID":"fdbd97e5-232b-4c09-b936-7258fc72a153","Type":"ContainerStarted","Data":"d0a195d6bc063553c4d13c9d6f91ea86b3d26a186d29b29facf3cc041263bb83"} Feb 27 17:09:39 crc kubenswrapper[4708]: I0227 17:09:39.259938 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dlr7" event={"ID":"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23","Type":"ContainerStarted","Data":"660ac5fbc3f8385542b1927f3c04965a0fb2a7e855905a42ed11bde378ce2c38"} Feb 27 17:09:39 crc kubenswrapper[4708]: I0227 17:09:39.285709 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8dlr7" podStartSLOduration=4.336130964 podStartE2EDuration="7.285690314s" podCreationTimestamp="2026-02-27 17:09:32 +0000 UTC" firstStartedPulling="2026-02-27 17:09:35.218691539 +0000 UTC m=+973.734489126" lastFinishedPulling="2026-02-27 17:09:38.168250889 +0000 UTC m=+976.684048476" observedRunningTime="2026-02-27 17:09:39.282596366 +0000 UTC m=+977.798393983" watchObservedRunningTime="2026-02-27 17:09:39.285690314 +0000 UTC m=+977.801487911" Feb 27 17:09:39 crc kubenswrapper[4708]: I0227 17:09:39.285912 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-69594cc75-mjd67" podStartSLOduration=2.263769866 podStartE2EDuration="9.285905329s" podCreationTimestamp="2026-02-27 17:09:30 +0000 UTC" firstStartedPulling="2026-02-27 17:09:30.759497528 +0000 UTC m=+969.275295115" lastFinishedPulling="2026-02-27 17:09:37.781632991 +0000 UTC m=+976.297430578" observedRunningTime="2026-02-27 17:09:38.291072382 +0000 UTC m=+976.806869999" watchObservedRunningTime="2026-02-27 17:09:39.285905329 +0000 UTC m=+977.801702926" Feb 27 17:09:40 crc kubenswrapper[4708]: I0227 17:09:40.602835 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-q6bvp" Feb 27 17:09:40 crc kubenswrapper[4708]: I0227 17:09:40.853760 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:40 crc kubenswrapper[4708]: I0227 17:09:40.853816 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:40 crc kubenswrapper[4708]: I0227 17:09:40.862786 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:41 crc kubenswrapper[4708]: I0227 17:09:41.280434 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5484948578-zng2t" Feb 27 17:09:41 crc kubenswrapper[4708]: I0227 17:09:41.348345 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cl8l9"] Feb 27 17:09:43 crc kubenswrapper[4708]: I0227 17:09:43.117114 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:43 crc kubenswrapper[4708]: I0227 17:09:43.117508 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:43 crc kubenswrapper[4708]: I0227 17:09:43.189991 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:43 crc kubenswrapper[4708]: I0227 17:09:43.370363 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:43 crc kubenswrapper[4708]: I0227 17:09:43.459107 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8dlr7"] Feb 27 17:09:45 crc kubenswrapper[4708]: I0227 17:09:45.300334 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8dlr7" podUID="9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" containerName="registry-server" containerID="cri-o://660ac5fbc3f8385542b1927f3c04965a0fb2a7e855905a42ed11bde378ce2c38" gracePeriod=2 Feb 27 17:09:45 crc kubenswrapper[4708]: I0227 17:09:45.742914 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:45 crc kubenswrapper[4708]: I0227 17:09:45.786802 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-catalog-content\") pod \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\" (UID: \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\") " Feb 27 17:09:45 crc kubenswrapper[4708]: I0227 17:09:45.786933 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qtc2\" (UniqueName: \"kubernetes.io/projected/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-kube-api-access-5qtc2\") pod \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\" (UID: \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\") " Feb 27 17:09:45 crc kubenswrapper[4708]: I0227 17:09:45.786997 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-utilities\") pod \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\" (UID: \"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23\") " Feb 27 17:09:45 crc kubenswrapper[4708]: I0227 17:09:45.788300 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-utilities" (OuterVolumeSpecName: "utilities") pod "9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" (UID: "9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:09:45 crc kubenswrapper[4708]: I0227 17:09:45.810318 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-kube-api-access-5qtc2" (OuterVolumeSpecName: "kube-api-access-5qtc2") pod "9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" (UID: "9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23"). InnerVolumeSpecName "kube-api-access-5qtc2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:09:45 crc kubenswrapper[4708]: I0227 17:09:45.872958 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" (UID: "9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:09:45 crc kubenswrapper[4708]: I0227 17:09:45.889000 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:09:45 crc kubenswrapper[4708]: I0227 17:09:45.889025 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qtc2\" (UniqueName: \"kubernetes.io/projected/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-kube-api-access-5qtc2\") on node \"crc\" DevicePath \"\"" Feb 27 17:09:45 crc kubenswrapper[4708]: I0227 17:09:45.889036 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.322638 4708 generic.go:334] "Generic (PLEG): container finished" podID="9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" containerID="660ac5fbc3f8385542b1927f3c04965a0fb2a7e855905a42ed11bde378ce2c38" exitCode=0 Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.322691 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dlr7" event={"ID":"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23","Type":"ContainerDied","Data":"660ac5fbc3f8385542b1927f3c04965a0fb2a7e855905a42ed11bde378ce2c38"} Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.322770 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8dlr7" event={"ID":"9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23","Type":"ContainerDied","Data":"b702a744d621460d781d356d8c4c2306b83a2e26bce33ce77b299058249923be"} Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.322795 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8dlr7" Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.322814 4708 scope.go:117] "RemoveContainer" containerID="660ac5fbc3f8385542b1927f3c04965a0fb2a7e855905a42ed11bde378ce2c38" Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.377351 4708 scope.go:117] "RemoveContainer" containerID="9b911e118cca60c0378fd15ea793d95e898da6212b5b3fbacef8608a870cd4c5" Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.383674 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8dlr7"] Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.388869 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8dlr7"] Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.407752 4708 scope.go:117] "RemoveContainer" containerID="2ea1c6c34635d2b9123774b0cfd6e523d07b3c300fda397acbf65ef70b488f01" Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.429667 4708 scope.go:117] "RemoveContainer" containerID="660ac5fbc3f8385542b1927f3c04965a0fb2a7e855905a42ed11bde378ce2c38" Feb 27 17:09:46 crc kubenswrapper[4708]: E0227 17:09:46.431037 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"660ac5fbc3f8385542b1927f3c04965a0fb2a7e855905a42ed11bde378ce2c38\": container with ID starting with 660ac5fbc3f8385542b1927f3c04965a0fb2a7e855905a42ed11bde378ce2c38 not found: ID does not exist" containerID="660ac5fbc3f8385542b1927f3c04965a0fb2a7e855905a42ed11bde378ce2c38" Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.431089 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"660ac5fbc3f8385542b1927f3c04965a0fb2a7e855905a42ed11bde378ce2c38"} err="failed to get container status \"660ac5fbc3f8385542b1927f3c04965a0fb2a7e855905a42ed11bde378ce2c38\": rpc error: code = NotFound desc = could not find container \"660ac5fbc3f8385542b1927f3c04965a0fb2a7e855905a42ed11bde378ce2c38\": container with ID starting with 660ac5fbc3f8385542b1927f3c04965a0fb2a7e855905a42ed11bde378ce2c38 not found: ID does not exist" Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.431127 4708 scope.go:117] "RemoveContainer" containerID="9b911e118cca60c0378fd15ea793d95e898da6212b5b3fbacef8608a870cd4c5" Feb 27 17:09:46 crc kubenswrapper[4708]: E0227 17:09:46.431498 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b911e118cca60c0378fd15ea793d95e898da6212b5b3fbacef8608a870cd4c5\": container with ID starting with 9b911e118cca60c0378fd15ea793d95e898da6212b5b3fbacef8608a870cd4c5 not found: ID does not exist" containerID="9b911e118cca60c0378fd15ea793d95e898da6212b5b3fbacef8608a870cd4c5" Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.431538 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b911e118cca60c0378fd15ea793d95e898da6212b5b3fbacef8608a870cd4c5"} err="failed to get container status \"9b911e118cca60c0378fd15ea793d95e898da6212b5b3fbacef8608a870cd4c5\": rpc error: code = NotFound desc = could not find container \"9b911e118cca60c0378fd15ea793d95e898da6212b5b3fbacef8608a870cd4c5\": container with ID starting with 9b911e118cca60c0378fd15ea793d95e898da6212b5b3fbacef8608a870cd4c5 not found: ID does not exist" Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.431566 4708 scope.go:117] "RemoveContainer" 
containerID="2ea1c6c34635d2b9123774b0cfd6e523d07b3c300fda397acbf65ef70b488f01" Feb 27 17:09:46 crc kubenswrapper[4708]: E0227 17:09:46.431896 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ea1c6c34635d2b9123774b0cfd6e523d07b3c300fda397acbf65ef70b488f01\": container with ID starting with 2ea1c6c34635d2b9123774b0cfd6e523d07b3c300fda397acbf65ef70b488f01 not found: ID does not exist" containerID="2ea1c6c34635d2b9123774b0cfd6e523d07b3c300fda397acbf65ef70b488f01" Feb 27 17:09:46 crc kubenswrapper[4708]: I0227 17:09:46.431954 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ea1c6c34635d2b9123774b0cfd6e523d07b3c300fda397acbf65ef70b488f01"} err="failed to get container status \"2ea1c6c34635d2b9123774b0cfd6e523d07b3c300fda397acbf65ef70b488f01\": rpc error: code = NotFound desc = could not find container \"2ea1c6c34635d2b9123774b0cfd6e523d07b3c300fda397acbf65ef70b488f01\": container with ID starting with 2ea1c6c34635d2b9123774b0cfd6e523d07b3c300fda397acbf65ef70b488f01 not found: ID does not exist" Feb 27 17:09:48 crc kubenswrapper[4708]: I0227 17:09:48.239411 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" path="/var/lib/kubelet/pods/9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23/volumes" Feb 27 17:09:51 crc kubenswrapper[4708]: I0227 17:09:51.164179 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.142624 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536870-zwf6n"] Feb 27 17:10:00 crc kubenswrapper[4708]: E0227 17:10:00.143308 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" containerName="extract-content" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.143321 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" containerName="extract-content" Feb 27 17:10:00 crc kubenswrapper[4708]: E0227 17:10:00.143334 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" containerName="registry-server" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.143340 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" containerName="registry-server" Feb 27 17:10:00 crc kubenswrapper[4708]: E0227 17:10:00.143354 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" containerName="extract-utilities" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.143361 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" containerName="extract-utilities" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.143455 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c2e97ed-9a09-4ceb-bc7f-7b8b16d98b23" containerName="registry-server" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.143857 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536870-zwf6n" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.147284 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.147378 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.148410 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.159578 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536870-zwf6n"] Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.191388 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4mrc\" (UniqueName: \"kubernetes.io/projected/472fdd57-63d6-48e4-90b4-ea859313d030-kube-api-access-t4mrc\") pod \"auto-csr-approver-29536870-zwf6n\" (UID: \"472fdd57-63d6-48e4-90b4-ea859313d030\") " pod="openshift-infra/auto-csr-approver-29536870-zwf6n" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.257700 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-tt4mx"] Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.259944 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tt4mx" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.282185 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tt4mx"] Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.293748 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96443bed-22ed-4a2d-ae78-0ebf259f25e3-catalog-content\") pod \"community-operators-tt4mx\" (UID: \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\") " pod="openshift-marketplace/community-operators-tt4mx" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.293811 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96443bed-22ed-4a2d-ae78-0ebf259f25e3-utilities\") pod \"community-operators-tt4mx\" (UID: \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\") " pod="openshift-marketplace/community-operators-tt4mx" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.293992 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvz7t\" (UniqueName: \"kubernetes.io/projected/96443bed-22ed-4a2d-ae78-0ebf259f25e3-kube-api-access-zvz7t\") pod \"community-operators-tt4mx\" (UID: \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\") " pod="openshift-marketplace/community-operators-tt4mx" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.294041 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4mrc\" (UniqueName: \"kubernetes.io/projected/472fdd57-63d6-48e4-90b4-ea859313d030-kube-api-access-t4mrc\") pod \"auto-csr-approver-29536870-zwf6n\" (UID: \"472fdd57-63d6-48e4-90b4-ea859313d030\") " pod="openshift-infra/auto-csr-approver-29536870-zwf6n" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.338920 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-t4mrc\" (UniqueName: \"kubernetes.io/projected/472fdd57-63d6-48e4-90b4-ea859313d030-kube-api-access-t4mrc\") pod \"auto-csr-approver-29536870-zwf6n\" (UID: \"472fdd57-63d6-48e4-90b4-ea859313d030\") " pod="openshift-infra/auto-csr-approver-29536870-zwf6n" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.395032 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvz7t\" (UniqueName: \"kubernetes.io/projected/96443bed-22ed-4a2d-ae78-0ebf259f25e3-kube-api-access-zvz7t\") pod \"community-operators-tt4mx\" (UID: \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\") " pod="openshift-marketplace/community-operators-tt4mx" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.395106 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96443bed-22ed-4a2d-ae78-0ebf259f25e3-catalog-content\") pod \"community-operators-tt4mx\" (UID: \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\") " pod="openshift-marketplace/community-operators-tt4mx" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.395130 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96443bed-22ed-4a2d-ae78-0ebf259f25e3-utilities\") pod \"community-operators-tt4mx\" (UID: \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\") " pod="openshift-marketplace/community-operators-tt4mx" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.396010 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96443bed-22ed-4a2d-ae78-0ebf259f25e3-utilities\") pod \"community-operators-tt4mx\" (UID: \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\") " pod="openshift-marketplace/community-operators-tt4mx" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.396262 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96443bed-22ed-4a2d-ae78-0ebf259f25e3-catalog-content\") pod \"community-operators-tt4mx\" (UID: \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\") " pod="openshift-marketplace/community-operators-tt4mx" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.417048 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvz7t\" (UniqueName: \"kubernetes.io/projected/96443bed-22ed-4a2d-ae78-0ebf259f25e3-kube-api-access-zvz7t\") pod \"community-operators-tt4mx\" (UID: \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\") " pod="openshift-marketplace/community-operators-tt4mx" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.468633 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536870-zwf6n" Feb 27 17:10:00 crc kubenswrapper[4708]: I0227 17:10:00.585167 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tt4mx" Feb 27 17:10:01 crc kubenswrapper[4708]: I0227 17:10:01.182633 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536870-zwf6n"] Feb 27 17:10:01 crc kubenswrapper[4708]: I0227 17:10:01.266456 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-tt4mx"] Feb 27 17:10:01 crc kubenswrapper[4708]: I0227 17:10:01.426453 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tt4mx" event={"ID":"96443bed-22ed-4a2d-ae78-0ebf259f25e3","Type":"ContainerStarted","Data":"3c2a0478018e361d6a75b56b9f2ee938a82c153057dcca52675b3484200f9954"} Feb 27 17:10:01 crc kubenswrapper[4708]: I0227 17:10:01.427446 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536870-zwf6n" event={"ID":"472fdd57-63d6-48e4-90b4-ea859313d030","Type":"ContainerStarted","Data":"e07d0ca24ac16188472f858d70558935b8e4d3e94615b1e40b0e08c0cc83f64f"} Feb 27 17:10:02 crc kubenswrapper[4708]: I0227 17:10:02.436225 4708 generic.go:334] "Generic (PLEG): container finished" podID="96443bed-22ed-4a2d-ae78-0ebf259f25e3" containerID="cec018ac1eb5a3f0236c16571de4c2dc42db51433acb5f6b29d9534d6cf3e099" exitCode=0 Feb 27 17:10:02 crc kubenswrapper[4708]: I0227 17:10:02.436278 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tt4mx" event={"ID":"96443bed-22ed-4a2d-ae78-0ebf259f25e3","Type":"ContainerDied","Data":"cec018ac1eb5a3f0236c16571de4c2dc42db51433acb5f6b29d9534d6cf3e099"} Feb 27 17:10:03 crc kubenswrapper[4708]: I0227 17:10:03.443392 4708 generic.go:334] "Generic (PLEG): container finished" podID="472fdd57-63d6-48e4-90b4-ea859313d030" containerID="2e1d1d6696a81e89844f170efb76497881717839f08def2e50b3d046e8135816" exitCode=0 Feb 27 17:10:03 crc kubenswrapper[4708]: I0227 17:10:03.443584 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536870-zwf6n" event={"ID":"472fdd57-63d6-48e4-90b4-ea859313d030","Type":"ContainerDied","Data":"2e1d1d6696a81e89844f170efb76497881717839f08def2e50b3d046e8135816"} Feb 27 17:10:04 crc kubenswrapper[4708]: I0227 17:10:04.452042 4708 generic.go:334] "Generic (PLEG): container finished" podID="96443bed-22ed-4a2d-ae78-0ebf259f25e3" containerID="67219863b9929f41dfb9c18456a8bb32e057d980f3c40798feb879f17e245c55" exitCode=0 Feb 27 17:10:04 crc kubenswrapper[4708]: I0227 17:10:04.452130 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tt4mx" event={"ID":"96443bed-22ed-4a2d-ae78-0ebf259f25e3","Type":"ContainerDied","Data":"67219863b9929f41dfb9c18456a8bb32e057d980f3c40798feb879f17e245c55"} Feb 27 17:10:04 crc kubenswrapper[4708]: I0227 17:10:04.791970 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536870-zwf6n" Feb 27 17:10:04 crc kubenswrapper[4708]: I0227 17:10:04.900735 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4mrc\" (UniqueName: \"kubernetes.io/projected/472fdd57-63d6-48e4-90b4-ea859313d030-kube-api-access-t4mrc\") pod \"472fdd57-63d6-48e4-90b4-ea859313d030\" (UID: \"472fdd57-63d6-48e4-90b4-ea859313d030\") " Feb 27 17:10:04 crc kubenswrapper[4708]: I0227 17:10:04.919192 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/472fdd57-63d6-48e4-90b4-ea859313d030-kube-api-access-t4mrc" (OuterVolumeSpecName: "kube-api-access-t4mrc") pod "472fdd57-63d6-48e4-90b4-ea859313d030" (UID: "472fdd57-63d6-48e4-90b4-ea859313d030"). InnerVolumeSpecName "kube-api-access-t4mrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:10:05 crc kubenswrapper[4708]: I0227 17:10:05.002468 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4mrc\" (UniqueName: \"kubernetes.io/projected/472fdd57-63d6-48e4-90b4-ea859313d030-kube-api-access-t4mrc\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:05 crc kubenswrapper[4708]: I0227 17:10:05.463015 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536870-zwf6n" event={"ID":"472fdd57-63d6-48e4-90b4-ea859313d030","Type":"ContainerDied","Data":"e07d0ca24ac16188472f858d70558935b8e4d3e94615b1e40b0e08c0cc83f64f"} Feb 27 17:10:05 crc kubenswrapper[4708]: I0227 17:10:05.463187 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e07d0ca24ac16188472f858d70558935b8e4d3e94615b1e40b0e08c0cc83f64f" Feb 27 17:10:05 crc kubenswrapper[4708]: I0227 17:10:05.463236 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536870-zwf6n" Feb 27 17:10:05 crc kubenswrapper[4708]: I0227 17:10:05.485107 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tt4mx" event={"ID":"96443bed-22ed-4a2d-ae78-0ebf259f25e3","Type":"ContainerStarted","Data":"9a17c717531a2b29cfd10f0c12d7f97c08a6dd18eafbea76f9ad6c029c6e530a"} Feb 27 17:10:05 crc kubenswrapper[4708]: I0227 17:10:05.513282 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-tt4mx" podStartSLOduration=3.072569119 podStartE2EDuration="5.513259072s" podCreationTimestamp="2026-02-27 17:10:00 +0000 UTC" firstStartedPulling="2026-02-27 17:10:02.43814493 +0000 UTC m=+1000.953942517" lastFinishedPulling="2026-02-27 17:10:04.878834863 +0000 UTC m=+1003.394632470" observedRunningTime="2026-02-27 17:10:05.508670817 +0000 UTC m=+1004.024468414" watchObservedRunningTime="2026-02-27 17:10:05.513259072 +0000 UTC m=+1004.029056689" Feb 27 17:10:05 crc kubenswrapper[4708]: E0227 17:10:05.596021 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod472fdd57_63d6_48e4_90b4_ea859313d030.slice/crio-e07d0ca24ac16188472f858d70558935b8e4d3e94615b1e40b0e08c0cc83f64f\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod472fdd57_63d6_48e4_90b4_ea859313d030.slice\": RecentStats: unable to find data in memory cache]" Feb 27 17:10:05 crc kubenswrapper[4708]: I0227 17:10:05.631909 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:10:05 crc kubenswrapper[4708]: I0227 17:10:05.632087 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:10:05 crc kubenswrapper[4708]: I0227 17:10:05.632126 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 17:10:05 crc kubenswrapper[4708]: I0227 17:10:05.633066 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b93b6ea88dbf15ec38dc361eee21fbc69cdb9df7c63344796e2852a98085a90"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:10:05 crc kubenswrapper[4708]: I0227 17:10:05.633114 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://1b93b6ea88dbf15ec38dc361eee21fbc69cdb9df7c63344796e2852a98085a90" gracePeriod=600 Feb 27 17:10:05 crc kubenswrapper[4708]: I0227 17:10:05.852287 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-infra/auto-csr-approver-29536864-ztd9p"] Feb 27 17:10:05 crc kubenswrapper[4708]: I0227 17:10:05.856433 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536864-ztd9p"] Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.236378 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef0f4977-e298-40f1-8d1d-23ebf0111f9f" path="/var/lib/kubelet/pods/ef0f4977-e298-40f1-8d1d-23ebf0111f9f/volumes" Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.401061 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-cl8l9" podUID="bd7c826a-ca70-4d4f-90ca-96f0b72c173a" containerName="console" containerID="cri-o://431d59f3f252df252cba4556b36207794b0b3e2e50a1549ead2623aec14b184e" gracePeriod=15 Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.493037 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="1b93b6ea88dbf15ec38dc361eee21fbc69cdb9df7c63344796e2852a98085a90" exitCode=0 Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.494051 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"1b93b6ea88dbf15ec38dc361eee21fbc69cdb9df7c63344796e2852a98085a90"} Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.494075 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"39dbd7797d34062ee99cfd72758adf14eea4f4680611bae0c80a2a4882b14a2d"} Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.494092 4708 scope.go:117] "RemoveContainer" containerID="73433f85f32a02d199ead494dd30f304e263e12a457d51cac8315ed1c3121a5b" Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.796895 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cl8l9_bd7c826a-ca70-4d4f-90ca-96f0b72c173a/console/0.log" Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.796954 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.938424 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-config\") pod \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.938474 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-oauth-serving-cert\") pod \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.938499 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-serving-cert\") pod \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.938514 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-oauth-config\") pod \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.938551 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-service-ca\") pod \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.938589 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqcks\" (UniqueName: \"kubernetes.io/projected/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-kube-api-access-nqcks\") pod \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.938625 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-trusted-ca-bundle\") pod \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\" (UID: \"bd7c826a-ca70-4d4f-90ca-96f0b72c173a\") " Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.939359 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bd7c826a-ca70-4d4f-90ca-96f0b72c173a" (UID: "bd7c826a-ca70-4d4f-90ca-96f0b72c173a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.939392 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-config" (OuterVolumeSpecName: "console-config") pod "bd7c826a-ca70-4d4f-90ca-96f0b72c173a" (UID: "bd7c826a-ca70-4d4f-90ca-96f0b72c173a"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.939406 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bd7c826a-ca70-4d4f-90ca-96f0b72c173a" (UID: "bd7c826a-ca70-4d4f-90ca-96f0b72c173a"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.939818 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-service-ca" (OuterVolumeSpecName: "service-ca") pod "bd7c826a-ca70-4d4f-90ca-96f0b72c173a" (UID: "bd7c826a-ca70-4d4f-90ca-96f0b72c173a"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.945360 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bd7c826a-ca70-4d4f-90ca-96f0b72c173a" (UID: "bd7c826a-ca70-4d4f-90ca-96f0b72c173a"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.959161 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bd7c826a-ca70-4d4f-90ca-96f0b72c173a" (UID: "bd7c826a-ca70-4d4f-90ca-96f0b72c173a"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:10:06 crc kubenswrapper[4708]: I0227 17:10:06.970101 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-kube-api-access-nqcks" (OuterVolumeSpecName: "kube-api-access-nqcks") pod "bd7c826a-ca70-4d4f-90ca-96f0b72c173a" (UID: "bd7c826a-ca70-4d4f-90ca-96f0b72c173a"). InnerVolumeSpecName "kube-api-access-nqcks". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.039812 4708 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.039864 4708 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.039878 4708 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.039891 4708 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-serving-cert\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.039903 4708 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-console-oauth-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.039914 4708 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-service-ca\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.039925 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqcks\" (UniqueName: \"kubernetes.io/projected/bd7c826a-ca70-4d4f-90ca-96f0b72c173a-kube-api-access-nqcks\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.502004 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-cl8l9_bd7c826a-ca70-4d4f-90ca-96f0b72c173a/console/0.log" Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.502261 4708 generic.go:334] "Generic (PLEG): container finished" podID="bd7c826a-ca70-4d4f-90ca-96f0b72c173a" containerID="431d59f3f252df252cba4556b36207794b0b3e2e50a1549ead2623aec14b184e" exitCode=2 Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.502285 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cl8l9" event={"ID":"bd7c826a-ca70-4d4f-90ca-96f0b72c173a","Type":"ContainerDied","Data":"431d59f3f252df252cba4556b36207794b0b3e2e50a1549ead2623aec14b184e"} Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.502306 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-cl8l9" event={"ID":"bd7c826a-ca70-4d4f-90ca-96f0b72c173a","Type":"ContainerDied","Data":"738361c2651fbe219fc21eeca0a247a3e39bc9bf378f5d8fd9cd42cf55eedda0"} Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.502323 4708 scope.go:117] "RemoveContainer" containerID="431d59f3f252df252cba4556b36207794b0b3e2e50a1549ead2623aec14b184e" Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.502392 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-cl8l9" Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.533040 4708 scope.go:117] "RemoveContainer" containerID="431d59f3f252df252cba4556b36207794b0b3e2e50a1549ead2623aec14b184e" Feb 27 17:10:07 crc kubenswrapper[4708]: E0227 17:10:07.533767 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"431d59f3f252df252cba4556b36207794b0b3e2e50a1549ead2623aec14b184e\": container with ID starting with 431d59f3f252df252cba4556b36207794b0b3e2e50a1549ead2623aec14b184e not found: ID does not exist" containerID="431d59f3f252df252cba4556b36207794b0b3e2e50a1549ead2623aec14b184e" Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.533811 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"431d59f3f252df252cba4556b36207794b0b3e2e50a1549ead2623aec14b184e"} err="failed to get container status \"431d59f3f252df252cba4556b36207794b0b3e2e50a1549ead2623aec14b184e\": rpc error: code = NotFound desc = could not find container \"431d59f3f252df252cba4556b36207794b0b3e2e50a1549ead2623aec14b184e\": container with ID starting with 431d59f3f252df252cba4556b36207794b0b3e2e50a1549ead2623aec14b184e not found: ID does not exist" Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.540529 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-cl8l9"] Feb 27 17:10:07 crc kubenswrapper[4708]: I0227 17:10:07.545994 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-cl8l9"] Feb 27 17:10:08 crc kubenswrapper[4708]: I0227 17:10:08.237148 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd7c826a-ca70-4d4f-90ca-96f0b72c173a" path="/var/lib/kubelet/pods/bd7c826a-ca70-4d4f-90ca-96f0b72c173a/volumes" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.008806 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh"] Feb 27 17:10:09 crc kubenswrapper[4708]: E0227 17:10:09.009349 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="472fdd57-63d6-48e4-90b4-ea859313d030" containerName="oc" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.009370 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="472fdd57-63d6-48e4-90b4-ea859313d030" containerName="oc" Feb 27 17:10:09 crc kubenswrapper[4708]: E0227 17:10:09.009391 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd7c826a-ca70-4d4f-90ca-96f0b72c173a" containerName="console" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.009407 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="bd7c826a-ca70-4d4f-90ca-96f0b72c173a" containerName="console" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.009598 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="472fdd57-63d6-48e4-90b4-ea859313d030" containerName="oc" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.009621 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd7c826a-ca70-4d4f-90ca-96f0b72c173a" containerName="console" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.011008 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.013775 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.018226 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh"] Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.065598 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l997r\" (UniqueName: \"kubernetes.io/projected/950927f1-3a77-4b7d-bec6-c669d6c60496-kube-api-access-l997r\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh\" (UID: \"950927f1-3a77-4b7d-bec6-c669d6c60496\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.065759 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/950927f1-3a77-4b7d-bec6-c669d6c60496-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh\" (UID: \"950927f1-3a77-4b7d-bec6-c669d6c60496\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.065837 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/950927f1-3a77-4b7d-bec6-c669d6c60496-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh\" (UID: \"950927f1-3a77-4b7d-bec6-c669d6c60496\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.167493 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/950927f1-3a77-4b7d-bec6-c669d6c60496-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh\" (UID: \"950927f1-3a77-4b7d-bec6-c669d6c60496\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.167575 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/950927f1-3a77-4b7d-bec6-c669d6c60496-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh\" (UID: \"950927f1-3a77-4b7d-bec6-c669d6c60496\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.167621 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l997r\" (UniqueName: \"kubernetes.io/projected/950927f1-3a77-4b7d-bec6-c669d6c60496-kube-api-access-l997r\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh\" (UID: \"950927f1-3a77-4b7d-bec6-c669d6c60496\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.168472 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/950927f1-3a77-4b7d-bec6-c669d6c60496-bundle\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh\" (UID: \"950927f1-3a77-4b7d-bec6-c669d6c60496\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.168520 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/950927f1-3a77-4b7d-bec6-c669d6c60496-util\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh\" (UID: \"950927f1-3a77-4b7d-bec6-c669d6c60496\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.196749 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l997r\" (UniqueName: \"kubernetes.io/projected/950927f1-3a77-4b7d-bec6-c669d6c60496-kube-api-access-l997r\") pod \"d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh\" (UID: \"950927f1-3a77-4b7d-bec6-c669d6c60496\") " pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.345488 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" Feb 27 17:10:09 crc kubenswrapper[4708]: I0227 17:10:09.854977 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh"] Feb 27 17:10:09 crc kubenswrapper[4708]: W0227 17:10:09.859841 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod950927f1_3a77_4b7d_bec6_c669d6c60496.slice/crio-5d612f0474fd1e7539c8bb6638401131a0f0cc442e425af91e6fec9fb887affa WatchSource:0}: Error finding container 5d612f0474fd1e7539c8bb6638401131a0f0cc442e425af91e6fec9fb887affa: Status 404 returned error can't find the container with id 5d612f0474fd1e7539c8bb6638401131a0f0cc442e425af91e6fec9fb887affa Feb 27 17:10:10 crc kubenswrapper[4708]: I0227 17:10:10.550701 4708 generic.go:334] "Generic (PLEG): container finished" podID="950927f1-3a77-4b7d-bec6-c669d6c60496" containerID="38edc80902a25143eacbd08cfe2342323dcdb905c5a81d77795a3579ec07d4bb" exitCode=0 Feb 27 17:10:10 crc kubenswrapper[4708]: I0227 17:10:10.550801 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" event={"ID":"950927f1-3a77-4b7d-bec6-c669d6c60496","Type":"ContainerDied","Data":"38edc80902a25143eacbd08cfe2342323dcdb905c5a81d77795a3579ec07d4bb"} Feb 27 17:10:10 crc kubenswrapper[4708]: I0227 17:10:10.551340 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" event={"ID":"950927f1-3a77-4b7d-bec6-c669d6c60496","Type":"ContainerStarted","Data":"5d612f0474fd1e7539c8bb6638401131a0f0cc442e425af91e6fec9fb887affa"} Feb 27 17:10:10 crc kubenswrapper[4708]: I0227 17:10:10.586549 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-tt4mx" Feb 27 17:10:10 crc kubenswrapper[4708]: I0227 17:10:10.587039 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-tt4mx" Feb 27 
Feb 27 17:10:10 crc kubenswrapper[4708]: I0227 17:10:10.632152 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-tt4mx"
Feb 27 17:10:11 crc kubenswrapper[4708]: I0227 17:10:11.629758 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-tt4mx"
Feb 27 17:10:12 crc kubenswrapper[4708]: I0227 17:10:12.571360 4708 generic.go:334] "Generic (PLEG): container finished" podID="950927f1-3a77-4b7d-bec6-c669d6c60496" containerID="146dc8962746ae59644871e90284dc962fb3f1f55c3dcf66c1adb6ffcd3aec39" exitCode=0
Feb 27 17:10:12 crc kubenswrapper[4708]: I0227 17:10:12.571659 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" event={"ID":"950927f1-3a77-4b7d-bec6-c669d6c60496","Type":"ContainerDied","Data":"146dc8962746ae59644871e90284dc962fb3f1f55c3dcf66c1adb6ffcd3aec39"}
Feb 27 17:10:13 crc kubenswrapper[4708]: I0227 17:10:13.143263 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tt4mx"]
Feb 27 17:10:13 crc kubenswrapper[4708]: I0227 17:10:13.582964 4708 generic.go:334] "Generic (PLEG): container finished" podID="950927f1-3a77-4b7d-bec6-c669d6c60496" containerID="ac5dadc7c7949c17d8bda594a9ad3af0f677f27203ed2fc08f6fc90a3accb6a0" exitCode=0
Feb 27 17:10:13 crc kubenswrapper[4708]: I0227 17:10:13.583103 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" event={"ID":"950927f1-3a77-4b7d-bec6-c669d6c60496","Type":"ContainerDied","Data":"ac5dadc7c7949c17d8bda594a9ad3af0f677f27203ed2fc08f6fc90a3accb6a0"}
Feb 27 17:10:14 crc kubenswrapper[4708]: I0227 17:10:14.591470 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-tt4mx" podUID="96443bed-22ed-4a2d-ae78-0ebf259f25e3" containerName="registry-server" containerID="cri-o://9a17c717531a2b29cfd10f0c12d7f97c08a6dd18eafbea76f9ad6c029c6e530a" gracePeriod=2
Feb 27 17:10:14 crc kubenswrapper[4708]: I0227 17:10:14.902715 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh"
Feb 27 17:10:14 crc kubenswrapper[4708]: I0227 17:10:14.947173 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/950927f1-3a77-4b7d-bec6-c669d6c60496-bundle\") pod \"950927f1-3a77-4b7d-bec6-c669d6c60496\" (UID: \"950927f1-3a77-4b7d-bec6-c669d6c60496\") "
Feb 27 17:10:14 crc kubenswrapper[4708]: I0227 17:10:14.947283 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/950927f1-3a77-4b7d-bec6-c669d6c60496-util\") pod \"950927f1-3a77-4b7d-bec6-c669d6c60496\" (UID: \"950927f1-3a77-4b7d-bec6-c669d6c60496\") "
Feb 27 17:10:14 crc kubenswrapper[4708]: I0227 17:10:14.947312 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l997r\" (UniqueName: \"kubernetes.io/projected/950927f1-3a77-4b7d-bec6-c669d6c60496-kube-api-access-l997r\") pod \"950927f1-3a77-4b7d-bec6-c669d6c60496\" (UID: \"950927f1-3a77-4b7d-bec6-c669d6c60496\") "
Feb 27 17:10:14 crc kubenswrapper[4708]: I0227 17:10:14.949182 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950927f1-3a77-4b7d-bec6-c669d6c60496-bundle" (OuterVolumeSpecName: "bundle") pod "950927f1-3a77-4b7d-bec6-c669d6c60496" (UID: "950927f1-3a77-4b7d-bec6-c669d6c60496"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:10:14 crc kubenswrapper[4708]: I0227 17:10:14.954900 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/950927f1-3a77-4b7d-bec6-c669d6c60496-kube-api-access-l997r" (OuterVolumeSpecName: "kube-api-access-l997r") pod "950927f1-3a77-4b7d-bec6-c669d6c60496" (UID: "950927f1-3a77-4b7d-bec6-c669d6c60496"). InnerVolumeSpecName "kube-api-access-l997r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:10:14 crc kubenswrapper[4708]: I0227 17:10:14.970099 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/950927f1-3a77-4b7d-bec6-c669d6c60496-util" (OuterVolumeSpecName: "util") pod "950927f1-3a77-4b7d-bec6-c669d6c60496" (UID: "950927f1-3a77-4b7d-bec6-c669d6c60496"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.027622 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-tt4mx"
Need to start a new one" pod="openshift-marketplace/community-operators-tt4mx" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.049233 4708 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/950927f1-3a77-4b7d-bec6-c669d6c60496-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.049266 4708 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/950927f1-3a77-4b7d-bec6-c669d6c60496-util\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.049280 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l997r\" (UniqueName: \"kubernetes.io/projected/950927f1-3a77-4b7d-bec6-c669d6c60496-kube-api-access-l997r\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.149719 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96443bed-22ed-4a2d-ae78-0ebf259f25e3-utilities\") pod \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\" (UID: \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\") " Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.149789 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvz7t\" (UniqueName: \"kubernetes.io/projected/96443bed-22ed-4a2d-ae78-0ebf259f25e3-kube-api-access-zvz7t\") pod \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\" (UID: \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\") " Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.149871 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96443bed-22ed-4a2d-ae78-0ebf259f25e3-catalog-content\") pod \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\" (UID: \"96443bed-22ed-4a2d-ae78-0ebf259f25e3\") " Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.150916 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96443bed-22ed-4a2d-ae78-0ebf259f25e3-utilities" (OuterVolumeSpecName: "utilities") pod "96443bed-22ed-4a2d-ae78-0ebf259f25e3" (UID: "96443bed-22ed-4a2d-ae78-0ebf259f25e3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.154252 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96443bed-22ed-4a2d-ae78-0ebf259f25e3-kube-api-access-zvz7t" (OuterVolumeSpecName: "kube-api-access-zvz7t") pod "96443bed-22ed-4a2d-ae78-0ebf259f25e3" (UID: "96443bed-22ed-4a2d-ae78-0ebf259f25e3"). InnerVolumeSpecName "kube-api-access-zvz7t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.251730 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/96443bed-22ed-4a2d-ae78-0ebf259f25e3-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.251779 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvz7t\" (UniqueName: \"kubernetes.io/projected/96443bed-22ed-4a2d-ae78-0ebf259f25e3-kube-api-access-zvz7t\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.331408 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96443bed-22ed-4a2d-ae78-0ebf259f25e3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "96443bed-22ed-4a2d-ae78-0ebf259f25e3" (UID: "96443bed-22ed-4a2d-ae78-0ebf259f25e3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.353707 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/96443bed-22ed-4a2d-ae78-0ebf259f25e3-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.605888 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.605884 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh" event={"ID":"950927f1-3a77-4b7d-bec6-c669d6c60496","Type":"ContainerDied","Data":"5d612f0474fd1e7539c8bb6638401131a0f0cc442e425af91e6fec9fb887affa"} Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.606521 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d612f0474fd1e7539c8bb6638401131a0f0cc442e425af91e6fec9fb887affa" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.610901 4708 generic.go:334] "Generic (PLEG): container finished" podID="96443bed-22ed-4a2d-ae78-0ebf259f25e3" containerID="9a17c717531a2b29cfd10f0c12d7f97c08a6dd18eafbea76f9ad6c029c6e530a" exitCode=0 Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.610985 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tt4mx" event={"ID":"96443bed-22ed-4a2d-ae78-0ebf259f25e3","Type":"ContainerDied","Data":"9a17c717531a2b29cfd10f0c12d7f97c08a6dd18eafbea76f9ad6c029c6e530a"} Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.611091 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-tt4mx" event={"ID":"96443bed-22ed-4a2d-ae78-0ebf259f25e3","Type":"ContainerDied","Data":"3c2a0478018e361d6a75b56b9f2ee938a82c153057dcca52675b3484200f9954"} Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.611136 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-tt4mx" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.611147 4708 scope.go:117] "RemoveContainer" containerID="9a17c717531a2b29cfd10f0c12d7f97c08a6dd18eafbea76f9ad6c029c6e530a" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.633474 4708 scope.go:117] "RemoveContainer" containerID="67219863b9929f41dfb9c18456a8bb32e057d980f3c40798feb879f17e245c55" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.672211 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-tt4mx"] Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.675477 4708 scope.go:117] "RemoveContainer" containerID="cec018ac1eb5a3f0236c16571de4c2dc42db51433acb5f6b29d9534d6cf3e099" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.680890 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-tt4mx"] Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.697789 4708 scope.go:117] "RemoveContainer" containerID="9a17c717531a2b29cfd10f0c12d7f97c08a6dd18eafbea76f9ad6c029c6e530a" Feb 27 17:10:15 crc kubenswrapper[4708]: E0227 17:10:15.698461 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a17c717531a2b29cfd10f0c12d7f97c08a6dd18eafbea76f9ad6c029c6e530a\": container with ID starting with 9a17c717531a2b29cfd10f0c12d7f97c08a6dd18eafbea76f9ad6c029c6e530a not found: ID does not exist" containerID="9a17c717531a2b29cfd10f0c12d7f97c08a6dd18eafbea76f9ad6c029c6e530a" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.698499 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a17c717531a2b29cfd10f0c12d7f97c08a6dd18eafbea76f9ad6c029c6e530a"} err="failed to get container status \"9a17c717531a2b29cfd10f0c12d7f97c08a6dd18eafbea76f9ad6c029c6e530a\": rpc error: code = NotFound desc = could not find container \"9a17c717531a2b29cfd10f0c12d7f97c08a6dd18eafbea76f9ad6c029c6e530a\": container with ID starting with 9a17c717531a2b29cfd10f0c12d7f97c08a6dd18eafbea76f9ad6c029c6e530a not found: ID does not exist" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.698526 4708 scope.go:117] "RemoveContainer" containerID="67219863b9929f41dfb9c18456a8bb32e057d980f3c40798feb879f17e245c55" Feb 27 17:10:15 crc kubenswrapper[4708]: E0227 17:10:15.699025 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67219863b9929f41dfb9c18456a8bb32e057d980f3c40798feb879f17e245c55\": container with ID starting with 67219863b9929f41dfb9c18456a8bb32e057d980f3c40798feb879f17e245c55 not found: ID does not exist" containerID="67219863b9929f41dfb9c18456a8bb32e057d980f3c40798feb879f17e245c55" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.699096 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67219863b9929f41dfb9c18456a8bb32e057d980f3c40798feb879f17e245c55"} err="failed to get container status \"67219863b9929f41dfb9c18456a8bb32e057d980f3c40798feb879f17e245c55\": rpc error: code = NotFound desc = could not find container \"67219863b9929f41dfb9c18456a8bb32e057d980f3c40798feb879f17e245c55\": container with ID starting with 67219863b9929f41dfb9c18456a8bb32e057d980f3c40798feb879f17e245c55 not found: ID does not exist" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.699140 4708 scope.go:117] "RemoveContainer" 
containerID="cec018ac1eb5a3f0236c16571de4c2dc42db51433acb5f6b29d9534d6cf3e099" Feb 27 17:10:15 crc kubenswrapper[4708]: E0227 17:10:15.699673 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cec018ac1eb5a3f0236c16571de4c2dc42db51433acb5f6b29d9534d6cf3e099\": container with ID starting with cec018ac1eb5a3f0236c16571de4c2dc42db51433acb5f6b29d9534d6cf3e099 not found: ID does not exist" containerID="cec018ac1eb5a3f0236c16571de4c2dc42db51433acb5f6b29d9534d6cf3e099" Feb 27 17:10:15 crc kubenswrapper[4708]: I0227 17:10:15.699715 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cec018ac1eb5a3f0236c16571de4c2dc42db51433acb5f6b29d9534d6cf3e099"} err="failed to get container status \"cec018ac1eb5a3f0236c16571de4c2dc42db51433acb5f6b29d9534d6cf3e099\": rpc error: code = NotFound desc = could not find container \"cec018ac1eb5a3f0236c16571de4c2dc42db51433acb5f6b29d9534d6cf3e099\": container with ID starting with cec018ac1eb5a3f0236c16571de4c2dc42db51433acb5f6b29d9534d6cf3e099 not found: ID does not exist" Feb 27 17:10:15 crc kubenswrapper[4708]: E0227 17:10:15.759384 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96443bed_22ed_4a2d_ae78_0ebf259f25e3.slice/crio-3c2a0478018e361d6a75b56b9f2ee938a82c153057dcca52675b3484200f9954\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod96443bed_22ed_4a2d_ae78_0ebf259f25e3.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod950927f1_3a77_4b7d_bec6_c669d6c60496.slice\": RecentStats: unable to find data in memory cache]" Feb 27 17:10:16 crc kubenswrapper[4708]: I0227 17:10:16.237429 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96443bed-22ed-4a2d-ae78-0ebf259f25e3" path="/var/lib/kubelet/pods/96443bed-22ed-4a2d-ae78-0ebf259f25e3/volumes" Feb 27 17:10:23 crc kubenswrapper[4708]: I0227 17:10:23.315593 4708 scope.go:117] "RemoveContainer" containerID="39899fe21d70373809577aba9526e08716e3482cfa79929bdbe852ac9482d42a" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.104067 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9"] Feb 27 17:10:25 crc kubenswrapper[4708]: E0227 17:10:25.104506 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950927f1-3a77-4b7d-bec6-c669d6c60496" containerName="util" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.104519 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="950927f1-3a77-4b7d-bec6-c669d6c60496" containerName="util" Feb 27 17:10:25 crc kubenswrapper[4708]: E0227 17:10:25.104526 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96443bed-22ed-4a2d-ae78-0ebf259f25e3" containerName="extract-utilities" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.104544 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="96443bed-22ed-4a2d-ae78-0ebf259f25e3" containerName="extract-utilities" Feb 27 17:10:25 crc kubenswrapper[4708]: E0227 17:10:25.104554 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96443bed-22ed-4a2d-ae78-0ebf259f25e3" containerName="extract-content" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.104561 4708 
state_mem.go:107] "Deleted CPUSet assignment" podUID="96443bed-22ed-4a2d-ae78-0ebf259f25e3" containerName="extract-content" Feb 27 17:10:25 crc kubenswrapper[4708]: E0227 17:10:25.104578 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950927f1-3a77-4b7d-bec6-c669d6c60496" containerName="pull" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.104583 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="950927f1-3a77-4b7d-bec6-c669d6c60496" containerName="pull" Feb 27 17:10:25 crc kubenswrapper[4708]: E0227 17:10:25.104594 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96443bed-22ed-4a2d-ae78-0ebf259f25e3" containerName="registry-server" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.104600 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="96443bed-22ed-4a2d-ae78-0ebf259f25e3" containerName="registry-server" Feb 27 17:10:25 crc kubenswrapper[4708]: E0227 17:10:25.104608 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="950927f1-3a77-4b7d-bec6-c669d6c60496" containerName="extract" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.104613 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="950927f1-3a77-4b7d-bec6-c669d6c60496" containerName="extract" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.104720 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="950927f1-3a77-4b7d-bec6-c669d6c60496" containerName="extract" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.104733 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="96443bed-22ed-4a2d-ae78-0ebf259f25e3" containerName="registry-server" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.105121 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.107701 4708 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.108211 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.108421 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.108461 4708 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.108557 4708 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-tztlw" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.128793 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9"] Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.195888 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sp5q\" (UniqueName: \"kubernetes.io/projected/38a1edba-b7f3-4051-bb7f-9f7c5ecea249-kube-api-access-4sp5q\") pod \"metallb-operator-controller-manager-7d79b99f67-pbln9\" (UID: \"38a1edba-b7f3-4051-bb7f-9f7c5ecea249\") " pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.195968 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38a1edba-b7f3-4051-bb7f-9f7c5ecea249-apiservice-cert\") pod \"metallb-operator-controller-manager-7d79b99f67-pbln9\" (UID: \"38a1edba-b7f3-4051-bb7f-9f7c5ecea249\") " pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.195987 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38a1edba-b7f3-4051-bb7f-9f7c5ecea249-webhook-cert\") pod \"metallb-operator-controller-manager-7d79b99f67-pbln9\" (UID: \"38a1edba-b7f3-4051-bb7f-9f7c5ecea249\") " pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.296993 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4sp5q\" (UniqueName: \"kubernetes.io/projected/38a1edba-b7f3-4051-bb7f-9f7c5ecea249-kube-api-access-4sp5q\") pod \"metallb-operator-controller-manager-7d79b99f67-pbln9\" (UID: \"38a1edba-b7f3-4051-bb7f-9f7c5ecea249\") " pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.297050 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38a1edba-b7f3-4051-bb7f-9f7c5ecea249-apiservice-cert\") pod \"metallb-operator-controller-manager-7d79b99f67-pbln9\" (UID: \"38a1edba-b7f3-4051-bb7f-9f7c5ecea249\") " pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9" Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.297068 
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.297068 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38a1edba-b7f3-4051-bb7f-9f7c5ecea249-webhook-cert\") pod \"metallb-operator-controller-manager-7d79b99f67-pbln9\" (UID: \"38a1edba-b7f3-4051-bb7f-9f7c5ecea249\") " pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.304478 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/38a1edba-b7f3-4051-bb7f-9f7c5ecea249-webhook-cert\") pod \"metallb-operator-controller-manager-7d79b99f67-pbln9\" (UID: \"38a1edba-b7f3-4051-bb7f-9f7c5ecea249\") " pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.304478 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/38a1edba-b7f3-4051-bb7f-9f7c5ecea249-apiservice-cert\") pod \"metallb-operator-controller-manager-7d79b99f67-pbln9\" (UID: \"38a1edba-b7f3-4051-bb7f-9f7c5ecea249\") " pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.318372 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4sp5q\" (UniqueName: \"kubernetes.io/projected/38a1edba-b7f3-4051-bb7f-9f7c5ecea249-kube-api-access-4sp5q\") pod \"metallb-operator-controller-manager-7d79b99f67-pbln9\" (UID: \"38a1edba-b7f3-4051-bb7f-9f7c5ecea249\") " pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.348617 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"]
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.349392 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.350780 4708 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.350944 4708 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.351388 4708 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-5gqsh"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.369268 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"]
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.398655 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c990cc98-0533-4790-8569-2c5b1f52f353-apiservice-cert\") pod \"metallb-operator-webhook-server-79f55df48d-fgptj\" (UID: \"c990cc98-0533-4790-8569-2c5b1f52f353\") " pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.398765 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c990cc98-0533-4790-8569-2c5b1f52f353-webhook-cert\") pod \"metallb-operator-webhook-server-79f55df48d-fgptj\" (UID: \"c990cc98-0533-4790-8569-2c5b1f52f353\") " pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.398806 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfh42\" (UniqueName: \"kubernetes.io/projected/c990cc98-0533-4790-8569-2c5b1f52f353-kube-api-access-cfh42\") pod \"metallb-operator-webhook-server-79f55df48d-fgptj\" (UID: \"c990cc98-0533-4790-8569-2c5b1f52f353\") " pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.418610 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.500360 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c990cc98-0533-4790-8569-2c5b1f52f353-apiservice-cert\") pod \"metallb-operator-webhook-server-79f55df48d-fgptj\" (UID: \"c990cc98-0533-4790-8569-2c5b1f52f353\") " pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.500428 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c990cc98-0533-4790-8569-2c5b1f52f353-webhook-cert\") pod \"metallb-operator-webhook-server-79f55df48d-fgptj\" (UID: \"c990cc98-0533-4790-8569-2c5b1f52f353\") " pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.500447 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfh42\" (UniqueName: \"kubernetes.io/projected/c990cc98-0533-4790-8569-2c5b1f52f353-kube-api-access-cfh42\") pod \"metallb-operator-webhook-server-79f55df48d-fgptj\" (UID: \"c990cc98-0533-4790-8569-2c5b1f52f353\") " pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.505648 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c990cc98-0533-4790-8569-2c5b1f52f353-apiservice-cert\") pod \"metallb-operator-webhook-server-79f55df48d-fgptj\" (UID: \"c990cc98-0533-4790-8569-2c5b1f52f353\") " pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.506200 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c990cc98-0533-4790-8569-2c5b1f52f353-webhook-cert\") pod \"metallb-operator-webhook-server-79f55df48d-fgptj\" (UID: \"c990cc98-0533-4790-8569-2c5b1f52f353\") " pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.546484 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfh42\" (UniqueName: \"kubernetes.io/projected/c990cc98-0533-4790-8569-2c5b1f52f353-kube-api-access-cfh42\") pod \"metallb-operator-webhook-server-79f55df48d-fgptj\" (UID: \"c990cc98-0533-4790-8569-2c5b1f52f353\") " pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.663577 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"
Feb 27 17:10:25 crc kubenswrapper[4708]: I0227 17:10:25.670331 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9"]
Feb 27 17:10:25 crc kubenswrapper[4708]: W0227 17:10:25.683802 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38a1edba_b7f3_4051_bb7f_9f7c5ecea249.slice/crio-a9813513c78384e9905df1453df2b3451d67100f0ff6ff69848a4494b1691350 WatchSource:0}: Error finding container a9813513c78384e9905df1453df2b3451d67100f0ff6ff69848a4494b1691350: Status 404 returned error can't find the container with id a9813513c78384e9905df1453df2b3451d67100f0ff6ff69848a4494b1691350
Feb 27 17:10:26 crc kubenswrapper[4708]: I0227 17:10:26.208727 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"]
Feb 27 17:10:26 crc kubenswrapper[4708]: W0227 17:10:26.227307 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc990cc98_0533_4790_8569_2c5b1f52f353.slice/crio-2ab484216adc82c8725f17d1752984fc8121a92c6b017916c6386154bc2da7ac WatchSource:0}: Error finding container 2ab484216adc82c8725f17d1752984fc8121a92c6b017916c6386154bc2da7ac: Status 404 returned error can't find the container with id 2ab484216adc82c8725f17d1752984fc8121a92c6b017916c6386154bc2da7ac
Feb 27 17:10:26 crc kubenswrapper[4708]: I0227 17:10:26.702713 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9" event={"ID":"38a1edba-b7f3-4051-bb7f-9f7c5ecea249","Type":"ContainerStarted","Data":"a9813513c78384e9905df1453df2b3451d67100f0ff6ff69848a4494b1691350"}
Feb 27 17:10:26 crc kubenswrapper[4708]: I0227 17:10:26.705093 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj" event={"ID":"c990cc98-0533-4790-8569-2c5b1f52f353","Type":"ContainerStarted","Data":"2ab484216adc82c8725f17d1752984fc8121a92c6b017916c6386154bc2da7ac"}
Feb 27 17:10:31 crc kubenswrapper[4708]: I0227 17:10:31.767665 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9" event={"ID":"38a1edba-b7f3-4051-bb7f-9f7c5ecea249","Type":"ContainerStarted","Data":"94a2f611fb460a5ed379d8be2871fda2717ae396c6efe81eb0e621e30377513d"}
Feb 27 17:10:31 crc kubenswrapper[4708]: I0227 17:10:31.768735 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9"
Feb 27 17:10:31 crc kubenswrapper[4708]: I0227 17:10:31.770300 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj" event={"ID":"c990cc98-0533-4790-8569-2c5b1f52f353","Type":"ContainerStarted","Data":"d47e7d2a75a2024378f20a902a9b24afefe6fe494efe7deef9c191fee5e725db"}
Feb 27 17:10:31 crc kubenswrapper[4708]: I0227 17:10:31.770510 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"
Feb 27 17:10:31 crc kubenswrapper[4708]: I0227 17:10:31.794917 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9" podStartSLOduration=1.6978733670000001 podStartE2EDuration="6.794900375s" podCreationTimestamp="2026-02-27 17:10:25 +0000 UTC" firstStartedPulling="2026-02-27 17:10:25.68675688 +0000 UTC m=+1024.202554467" lastFinishedPulling="2026-02-27 17:10:30.783783888 +0000 UTC m=+1029.299581475" observedRunningTime="2026-02-27 17:10:31.793201972 +0000 UTC m=+1030.308999579" watchObservedRunningTime="2026-02-27 17:10:31.794900375 +0000 UTC m=+1030.310697972"
Feb 27 17:10:31 crc kubenswrapper[4708]: I0227 17:10:31.815345 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj" podStartSLOduration=2.260927759 podStartE2EDuration="6.815324369s" podCreationTimestamp="2026-02-27 17:10:25 +0000 UTC" firstStartedPulling="2026-02-27 17:10:26.23029622 +0000 UTC m=+1024.746093807" lastFinishedPulling="2026-02-27 17:10:30.78469283 +0000 UTC m=+1029.300490417" observedRunningTime="2026-02-27 17:10:31.813265057 +0000 UTC m=+1030.329062654" watchObservedRunningTime="2026-02-27 17:10:31.815324369 +0000 UTC m=+1030.331121966"
Feb 27 17:10:45 crc kubenswrapper[4708]: I0227 17:10:45.669513 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-79f55df48d-fgptj"
Feb 27 17:11:05 crc kubenswrapper[4708]: I0227 17:11:05.422620 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-7d79b99f67-pbln9"
Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.220546 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-k8mc9"]
Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.222799 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-k8mc9"
Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.226624 4708 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-7mc2r"
Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.229284 4708 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.237157 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.243712 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz"]
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.247588 4708 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.255840 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz"] Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.328922 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-reloader\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.328980 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-metrics-certs\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.329005 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-frr-startup\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.329055 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-metrics\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.329094 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-frr-conf\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.329130 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-frr-sockets\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.329155 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgr2d\" (UniqueName: \"kubernetes.io/projected/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-kube-api-access-zgr2d\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.335338 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-26sk8"] Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.344425 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-26sk8" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.345454 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-86ddb6bd46-lpk95"] Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.351194 4708 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.351696 4708 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.351811 4708 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-b6mdd" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.352511 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-lpk95" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.352837 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.360878 4708 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.366030 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-lpk95"] Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.430202 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-metrics\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.430249 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c250b4a-ba60-4846-a259-3a5f04f9142a-cert\") pod \"frr-k8s-webhook-server-7f989f654f-cnmrz\" (UID: \"6c250b4a-ba60-4846-a259-3a5f04f9142a\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.430270 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrlvl\" (UniqueName: \"kubernetes.io/projected/6c250b4a-ba60-4846-a259-3a5f04f9142a-kube-api-access-jrlvl\") pod \"frr-k8s-webhook-server-7f989f654f-cnmrz\" (UID: \"6c250b4a-ba60-4846-a259-3a5f04f9142a\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.430301 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-frr-conf\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.430328 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-frr-sockets\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.430931 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: 
\"kubernetes.io/empty-dir/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-metrics\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.431114 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-frr-conf\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.431277 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-frr-sockets\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.430349 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgr2d\" (UniqueName: \"kubernetes.io/projected/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-kube-api-access-zgr2d\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.431316 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-reloader\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.431337 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-metrics-certs\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.431355 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-frr-startup\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.432159 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-frr-startup\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.432337 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-reloader\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: E0227 17:11:06.432392 4708 secret.go:188] Couldn't get secret metallb-system/frr-k8s-certs-secret: secret "frr-k8s-certs-secret" not found Feb 27 17:11:06 crc kubenswrapper[4708]: E0227 17:11:06.432428 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-metrics-certs podName:ab09f69e-3ca1-4192-b224-59fd8ce9ad0c nodeName:}" failed. 
No retries permitted until 2026-02-27 17:11:06.932416421 +0000 UTC m=+1065.448214008 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-metrics-certs") pod "frr-k8s-k8mc9" (UID: "ab09f69e-3ca1-4192-b224-59fd8ce9ad0c") : secret "frr-k8s-certs-secret" not found Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.455613 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgr2d\" (UniqueName: \"kubernetes.io/projected/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-kube-api-access-zgr2d\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.531979 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clgd6\" (UniqueName: \"kubernetes.io/projected/79f40a52-2cad-44c5-8698-3738361bcafa-kube-api-access-clgd6\") pod \"controller-86ddb6bd46-lpk95\" (UID: \"79f40a52-2cad-44c5-8698-3738361bcafa\") " pod="metallb-system/controller-86ddb6bd46-lpk95" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.532032 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grbfc\" (UniqueName: \"kubernetes.io/projected/684605c3-e5a8-4755-953e-84a8a4ab3e2e-kube-api-access-grbfc\") pod \"speaker-26sk8\" (UID: \"684605c3-e5a8-4755-953e-84a8a4ab3e2e\") " pod="metallb-system/speaker-26sk8" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.532060 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79f40a52-2cad-44c5-8698-3738361bcafa-cert\") pod \"controller-86ddb6bd46-lpk95\" (UID: \"79f40a52-2cad-44c5-8698-3738361bcafa\") " pod="metallb-system/controller-86ddb6bd46-lpk95" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.532075 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79f40a52-2cad-44c5-8698-3738361bcafa-metrics-certs\") pod \"controller-86ddb6bd46-lpk95\" (UID: \"79f40a52-2cad-44c5-8698-3738361bcafa\") " pod="metallb-system/controller-86ddb6bd46-lpk95" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.532100 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/684605c3-e5a8-4755-953e-84a8a4ab3e2e-metallb-excludel2\") pod \"speaker-26sk8\" (UID: \"684605c3-e5a8-4755-953e-84a8a4ab3e2e\") " pod="metallb-system/speaker-26sk8" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.532117 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/684605c3-e5a8-4755-953e-84a8a4ab3e2e-memberlist\") pod \"speaker-26sk8\" (UID: \"684605c3-e5a8-4755-953e-84a8a4ab3e2e\") " pod="metallb-system/speaker-26sk8" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.532138 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c250b4a-ba60-4846-a259-3a5f04f9142a-cert\") pod \"frr-k8s-webhook-server-7f989f654f-cnmrz\" (UID: \"6c250b4a-ba60-4846-a259-3a5f04f9142a\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz" Feb 27 17:11:06 crc kubenswrapper[4708]: 
I0227 17:11:06.532156 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrlvl\" (UniqueName: \"kubernetes.io/projected/6c250b4a-ba60-4846-a259-3a5f04f9142a-kube-api-access-jrlvl\") pod \"frr-k8s-webhook-server-7f989f654f-cnmrz\" (UID: \"6c250b4a-ba60-4846-a259-3a5f04f9142a\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.532195 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/684605c3-e5a8-4755-953e-84a8a4ab3e2e-metrics-certs\") pod \"speaker-26sk8\" (UID: \"684605c3-e5a8-4755-953e-84a8a4ab3e2e\") " pod="metallb-system/speaker-26sk8" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.535742 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6c250b4a-ba60-4846-a259-3a5f04f9142a-cert\") pod \"frr-k8s-webhook-server-7f989f654f-cnmrz\" (UID: \"6c250b4a-ba60-4846-a259-3a5f04f9142a\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.548248 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrlvl\" (UniqueName: \"kubernetes.io/projected/6c250b4a-ba60-4846-a259-3a5f04f9142a-kube-api-access-jrlvl\") pod \"frr-k8s-webhook-server-7f989f654f-cnmrz\" (UID: \"6c250b4a-ba60-4846-a259-3a5f04f9142a\") " pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.566705 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.633709 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grbfc\" (UniqueName: \"kubernetes.io/projected/684605c3-e5a8-4755-953e-84a8a4ab3e2e-kube-api-access-grbfc\") pod \"speaker-26sk8\" (UID: \"684605c3-e5a8-4755-953e-84a8a4ab3e2e\") " pod="metallb-system/speaker-26sk8" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.633764 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79f40a52-2cad-44c5-8698-3738361bcafa-cert\") pod \"controller-86ddb6bd46-lpk95\" (UID: \"79f40a52-2cad-44c5-8698-3738361bcafa\") " pod="metallb-system/controller-86ddb6bd46-lpk95" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.633784 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79f40a52-2cad-44c5-8698-3738361bcafa-metrics-certs\") pod \"controller-86ddb6bd46-lpk95\" (UID: \"79f40a52-2cad-44c5-8698-3738361bcafa\") " pod="metallb-system/controller-86ddb6bd46-lpk95" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.633809 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/684605c3-e5a8-4755-953e-84a8a4ab3e2e-metallb-excludel2\") pod \"speaker-26sk8\" (UID: \"684605c3-e5a8-4755-953e-84a8a4ab3e2e\") " pod="metallb-system/speaker-26sk8" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.633827 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/684605c3-e5a8-4755-953e-84a8a4ab3e2e-memberlist\") pod \"speaker-26sk8\" (UID: 
\"684605c3-e5a8-4755-953e-84a8a4ab3e2e\") " pod="metallb-system/speaker-26sk8" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.633885 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/684605c3-e5a8-4755-953e-84a8a4ab3e2e-metrics-certs\") pod \"speaker-26sk8\" (UID: \"684605c3-e5a8-4755-953e-84a8a4ab3e2e\") " pod="metallb-system/speaker-26sk8" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.633918 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clgd6\" (UniqueName: \"kubernetes.io/projected/79f40a52-2cad-44c5-8698-3738361bcafa-kube-api-access-clgd6\") pod \"controller-86ddb6bd46-lpk95\" (UID: \"79f40a52-2cad-44c5-8698-3738361bcafa\") " pod="metallb-system/controller-86ddb6bd46-lpk95" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.634821 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/684605c3-e5a8-4755-953e-84a8a4ab3e2e-metallb-excludel2\") pod \"speaker-26sk8\" (UID: \"684605c3-e5a8-4755-953e-84a8a4ab3e2e\") " pod="metallb-system/speaker-26sk8" Feb 27 17:11:06 crc kubenswrapper[4708]: E0227 17:11:06.635210 4708 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 27 17:11:06 crc kubenswrapper[4708]: E0227 17:11:06.635253 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/684605c3-e5a8-4755-953e-84a8a4ab3e2e-memberlist podName:684605c3-e5a8-4755-953e-84a8a4ab3e2e nodeName:}" failed. No retries permitted until 2026-02-27 17:11:07.135241669 +0000 UTC m=+1065.651039256 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/684605c3-e5a8-4755-953e-84a8a4ab3e2e-memberlist") pod "speaker-26sk8" (UID: "684605c3-e5a8-4755-953e-84a8a4ab3e2e") : secret "metallb-memberlist" not found Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.637407 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/79f40a52-2cad-44c5-8698-3738361bcafa-metrics-certs\") pod \"controller-86ddb6bd46-lpk95\" (UID: \"79f40a52-2cad-44c5-8698-3738361bcafa\") " pod="metallb-system/controller-86ddb6bd46-lpk95" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.639606 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/684605c3-e5a8-4755-953e-84a8a4ab3e2e-metrics-certs\") pod \"speaker-26sk8\" (UID: \"684605c3-e5a8-4755-953e-84a8a4ab3e2e\") " pod="metallb-system/speaker-26sk8" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.640302 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/79f40a52-2cad-44c5-8698-3738361bcafa-cert\") pod \"controller-86ddb6bd46-lpk95\" (UID: \"79f40a52-2cad-44c5-8698-3738361bcafa\") " pod="metallb-system/controller-86ddb6bd46-lpk95" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.649455 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grbfc\" (UniqueName: \"kubernetes.io/projected/684605c3-e5a8-4755-953e-84a8a4ab3e2e-kube-api-access-grbfc\") pod \"speaker-26sk8\" (UID: \"684605c3-e5a8-4755-953e-84a8a4ab3e2e\") " pod="metallb-system/speaker-26sk8" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.651806 4708 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clgd6\" (UniqueName: \"kubernetes.io/projected/79f40a52-2cad-44c5-8698-3738361bcafa-kube-api-access-clgd6\") pod \"controller-86ddb6bd46-lpk95\" (UID: \"79f40a52-2cad-44c5-8698-3738361bcafa\") " pod="metallb-system/controller-86ddb6bd46-lpk95" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.670863 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-86ddb6bd46-lpk95" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.938041 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-metrics-certs\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.943309 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab09f69e-3ca1-4192-b224-59fd8ce9ad0c-metrics-certs\") pod \"frr-k8s-k8mc9\" (UID: \"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c\") " pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:06 crc kubenswrapper[4708]: I0227 17:11:06.975798 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz"] Feb 27 17:11:07 crc kubenswrapper[4708]: I0227 17:11:07.042200 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz" event={"ID":"6c250b4a-ba60-4846-a259-3a5f04f9142a","Type":"ContainerStarted","Data":"bc70dd4d992866cda1343bc55b92532776bba37f80e280f670a08f1e2bebbf35"} Feb 27 17:11:07 crc kubenswrapper[4708]: I0227 17:11:07.068636 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-86ddb6bd46-lpk95"] Feb 27 17:11:07 crc kubenswrapper[4708]: W0227 17:11:07.080821 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod79f40a52_2cad_44c5_8698_3738361bcafa.slice/crio-24433cbe30685a992af93b45c17938b9b511ae4d29bf189a1d8d8925dcfe006c WatchSource:0}: Error finding container 24433cbe30685a992af93b45c17938b9b511ae4d29bf189a1d8d8925dcfe006c: Status 404 returned error can't find the container with id 24433cbe30685a992af93b45c17938b9b511ae4d29bf189a1d8d8925dcfe006c Feb 27 17:11:07 crc kubenswrapper[4708]: I0227 17:11:07.141790 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/684605c3-e5a8-4755-953e-84a8a4ab3e2e-memberlist\") pod \"speaker-26sk8\" (UID: \"684605c3-e5a8-4755-953e-84a8a4ab3e2e\") " pod="metallb-system/speaker-26sk8" Feb 27 17:11:07 crc kubenswrapper[4708]: E0227 17:11:07.142222 4708 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Feb 27 17:11:07 crc kubenswrapper[4708]: E0227 17:11:07.142304 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/684605c3-e5a8-4755-953e-84a8a4ab3e2e-memberlist podName:684605c3-e5a8-4755-953e-84a8a4ab3e2e nodeName:}" failed. No retries permitted until 2026-02-27 17:11:08.142278304 +0000 UTC m=+1066.658075921 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/684605c3-e5a8-4755-953e-84a8a4ab3e2e-memberlist") pod "speaker-26sk8" (UID: "684605c3-e5a8-4755-953e-84a8a4ab3e2e") : secret "metallb-memberlist" not found Feb 27 17:11:07 crc kubenswrapper[4708]: I0227 17:11:07.150114 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-k8mc9" Feb 27 17:11:08 crc kubenswrapper[4708]: I0227 17:11:08.053497 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-lpk95" event={"ID":"79f40a52-2cad-44c5-8698-3738361bcafa","Type":"ContainerStarted","Data":"9d7fe7a3b184a519fa1ce6a35f5f182fb6e552e8018b736a34c1733439e1051c"} Feb 27 17:11:08 crc kubenswrapper[4708]: I0227 17:11:08.053548 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-lpk95" event={"ID":"79f40a52-2cad-44c5-8698-3738361bcafa","Type":"ContainerStarted","Data":"2cbacc8d21d9787b7a4c88f2593deb1ea3fc6353c618303ab60dc46024bf3b01"} Feb 27 17:11:08 crc kubenswrapper[4708]: I0227 17:11:08.053563 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-86ddb6bd46-lpk95" event={"ID":"79f40a52-2cad-44c5-8698-3738361bcafa","Type":"ContainerStarted","Data":"24433cbe30685a992af93b45c17938b9b511ae4d29bf189a1d8d8925dcfe006c"} Feb 27 17:11:08 crc kubenswrapper[4708]: I0227 17:11:08.055942 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8mc9" event={"ID":"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c","Type":"ContainerStarted","Data":"217f0c76a47c991caeef83b90e080fd5e7df00453c212b6a644af3ce8da12347"} Feb 27 17:11:08 crc kubenswrapper[4708]: I0227 17:11:08.078731 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-86ddb6bd46-lpk95" podStartSLOduration=2.078713546 podStartE2EDuration="2.078713546s" podCreationTimestamp="2026-02-27 17:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:11:08.074127946 +0000 UTC m=+1066.589925553" watchObservedRunningTime="2026-02-27 17:11:08.078713546 +0000 UTC m=+1066.594511153" Feb 27 17:11:08 crc kubenswrapper[4708]: I0227 17:11:08.157111 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/684605c3-e5a8-4755-953e-84a8a4ab3e2e-memberlist\") pod \"speaker-26sk8\" (UID: \"684605c3-e5a8-4755-953e-84a8a4ab3e2e\") " pod="metallb-system/speaker-26sk8" Feb 27 17:11:08 crc kubenswrapper[4708]: I0227 17:11:08.164575 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/684605c3-e5a8-4755-953e-84a8a4ab3e2e-memberlist\") pod \"speaker-26sk8\" (UID: \"684605c3-e5a8-4755-953e-84a8a4ab3e2e\") " pod="metallb-system/speaker-26sk8" Feb 27 17:11:08 crc kubenswrapper[4708]: I0227 17:11:08.164821 4708 util.go:30] "No sandbox for pod can be found. 
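The two MountVolume.SetUp failures for \"memberlist\" above show the kubelet's per-volume retry backoff: the first retry is deferred 500ms (durationBeforeRetry 500ms), the next 1s, and the mount succeeds at 17:11:08.164575 once MetalLB has created the metallb-memberlist secret. A minimal client-go sketch of the same wait-for-secret pattern follows; it assumes a kubeconfig at the default location and only illustrates the doubling-backoff behavior, not the kubelet's actual informer-based code path.

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: a kubeconfig at ~/.kube/config; the kubelet itself uses
    	// watches rather than polling like this.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	backoff := 500 * time.Millisecond // mirrors durationBeforeRetry 500ms -> 1s above
    	for {
    		_, err := client.CoreV1().Secrets("metallb-system").
    			Get(context.Background(), "metallb-memberlist", metav1.GetOptions{})
    		if err == nil {
    			fmt.Println("metallb-memberlist present; the memberlist volume can mount")
    			return
    		}
    		fmt.Printf("not yet (%v); retrying in %v\n", err, backoff)
    		time.Sleep(backoff)
    		backoff *= 2 // doubling, as in the kubelet's nestedpendingoperations backoff
    	}
    }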
Feb 27 17:11:09 crc kubenswrapper[4708]: I0227 17:11:09.067391 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-26sk8" event={"ID":"684605c3-e5a8-4755-953e-84a8a4ab3e2e","Type":"ContainerStarted","Data":"a1c06dd4e6a8277a26ebaf135de17f9c2a33fd2ffef4ce81bd1c99f10c7537f4"}
Feb 27 17:11:09 crc kubenswrapper[4708]: I0227 17:11:09.067643 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-86ddb6bd46-lpk95"
Feb 27 17:11:09 crc kubenswrapper[4708]: I0227 17:11:09.067657 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-26sk8" event={"ID":"684605c3-e5a8-4755-953e-84a8a4ab3e2e","Type":"ContainerStarted","Data":"d05248dc569f98809b78310ca897fa2eb67f8313b043564bbbdb7c8943ae5f1b"}
Feb 27 17:11:09 crc kubenswrapper[4708]: I0227 17:11:09.067666 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-26sk8" event={"ID":"684605c3-e5a8-4755-953e-84a8a4ab3e2e","Type":"ContainerStarted","Data":"e93962b73796e5883c0632213e0c9041e3dcb292492270efc87ca66ea3b60e8d"}
Feb 27 17:11:09 crc kubenswrapper[4708]: I0227 17:11:09.067860 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-26sk8"
Feb 27 17:11:09 crc kubenswrapper[4708]: I0227 17:11:09.094439 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-26sk8" podStartSLOduration=3.09442388 podStartE2EDuration="3.09442388s" podCreationTimestamp="2026-02-27 17:11:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:11:09.093653588 +0000 UTC m=+1067.609451175" watchObservedRunningTime="2026-02-27 17:11:09.09442388 +0000 UTC m=+1067.610221457"
Feb 27 17:11:16 crc kubenswrapper[4708]: I0227 17:11:16.130605 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz" event={"ID":"6c250b4a-ba60-4846-a259-3a5f04f9142a","Type":"ContainerStarted","Data":"7cda683ffb91648090b1beb325420a65ef6de7abed1ec1eba78cd4d4735f21c1"}
Feb 27 17:11:16 crc kubenswrapper[4708]: I0227 17:11:16.131389 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz"
Feb 27 17:11:16 crc kubenswrapper[4708]: I0227 17:11:16.132691 4708 generic.go:334] "Generic (PLEG): container finished" podID="ab09f69e-3ca1-4192-b224-59fd8ce9ad0c" containerID="9779531605cdf673c547f5103c8535ccc952f949320569ee50039ad2b9426f64" exitCode=0
Feb 27 17:11:16 crc kubenswrapper[4708]: I0227 17:11:16.132744 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8mc9" event={"ID":"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c","Type":"ContainerDied","Data":"9779531605cdf673c547f5103c8535ccc952f949320569ee50039ad2b9426f64"}
Feb 27 17:11:16 crc kubenswrapper[4708]: I0227 17:11:16.158980 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz" podStartSLOduration=2.131638083 podStartE2EDuration="10.158957789s" podCreationTimestamp="2026-02-27 17:11:06 +0000 UTC" firstStartedPulling="2026-02-27 17:11:06.982551895 +0000 UTC m=+1065.498349522" lastFinishedPulling="2026-02-27 17:11:15.009871631 +0000 UTC m=+1073.525669228" observedRunningTime="2026-02-27 17:11:16.154469862 +0000 UTC m=+1074.670267459" watchObservedRunningTime="2026-02-27 17:11:16.158957789 +0000 UTC m=+1074.674755396"
Feb 27 17:11:17 crc kubenswrapper[4708]: I0227 17:11:17.146915 4708 generic.go:334] "Generic (PLEG): container finished" podID="ab09f69e-3ca1-4192-b224-59fd8ce9ad0c" containerID="01be2e675c913245ec6320079c7d8ce43516335da3cd7cf9432a55444fe30887" exitCode=0
Feb 27 17:11:17 crc kubenswrapper[4708]: I0227 17:11:17.148030 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8mc9" event={"ID":"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c","Type":"ContainerDied","Data":"01be2e675c913245ec6320079c7d8ce43516335da3cd7cf9432a55444fe30887"}
Feb 27 17:11:18 crc kubenswrapper[4708]: I0227 17:11:18.160953 4708 generic.go:334] "Generic (PLEG): container finished" podID="ab09f69e-3ca1-4192-b224-59fd8ce9ad0c" containerID="1b4e32f7eaae378c3bd71ce639fbd6133704474be076e54d6488b23be830e2cd" exitCode=0
Feb 27 17:11:18 crc kubenswrapper[4708]: I0227 17:11:18.161084 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8mc9" event={"ID":"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c","Type":"ContainerDied","Data":"1b4e32f7eaae378c3bd71ce639fbd6133704474be076e54d6488b23be830e2cd"}
Feb 27 17:11:18 crc kubenswrapper[4708]: I0227 17:11:18.171041 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-26sk8"
Feb 27 17:11:19 crc kubenswrapper[4708]: I0227 17:11:19.171092 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8mc9" event={"ID":"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c","Type":"ContainerStarted","Data":"ba49010801c89cd663af2b2c247fb2fee2697195ea4ea48860a13acd5288cbf0"}
Feb 27 17:11:19 crc kubenswrapper[4708]: I0227 17:11:19.171382 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8mc9" event={"ID":"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c","Type":"ContainerStarted","Data":"2a0a7b464674d02bebd014f1486053548f7945cd0438ce4d033030d8adb715d9"}
Feb 27 17:11:19 crc kubenswrapper[4708]: I0227 17:11:19.171394 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8mc9" event={"ID":"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c","Type":"ContainerStarted","Data":"c88a898eb311718b97bced47c83269ad3a71e49b17b87b741625ce3becdd5c6a"}
Feb 27 17:11:19 crc kubenswrapper[4708]: I0227 17:11:19.171406 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8mc9" event={"ID":"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c","Type":"ContainerStarted","Data":"69376796187fc6741fa78a88a6152ddd2e0edbe6b7ad35274eacf511d62c390b"}
Feb 27 17:11:20 crc kubenswrapper[4708]: I0227 17:11:20.184461 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8mc9" event={"ID":"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c","Type":"ContainerStarted","Data":"5b7011e38c4e31f1174d821487958e8e5f42fab68dc28fa94af79efffddefdd5"}
Feb 27 17:11:20 crc kubenswrapper[4708]: I0227 17:11:20.184506 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-k8mc9" event={"ID":"ab09f69e-3ca1-4192-b224-59fd8ce9ad0c","Type":"ContainerStarted","Data":"2421ed4f3f4cf72a9b1fee94bbd3d3b3f24f5d460f303fa1f734394313442f8c"}
Feb 27 17:11:20 crc kubenswrapper[4708]: I0227 17:11:20.184693 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-k8mc9"
Feb 27 17:11:20 crc kubenswrapper[4708]: I0227 17:11:20.217563 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-k8mc9" podStartSLOduration=6.408708702 podStartE2EDuration="14.217537357s" podCreationTimestamp="2026-02-27 17:11:06 +0000 UTC" firstStartedPulling="2026-02-27 17:11:07.255787365 +0000 UTC m=+1065.771584962" lastFinishedPulling="2026-02-27 17:11:15.06461602 +0000 UTC m=+1073.580413617" observedRunningTime="2026-02-27 17:11:20.213685688 +0000 UTC m=+1078.729483325" watchObservedRunningTime="2026-02-27 17:11:20.217537357 +0000 UTC m=+1078.733334954"
Feb 27 17:11:21 crc kubenswrapper[4708]: I0227 17:11:21.165765 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-c9rhn"]
Feb 27 17:11:21 crc kubenswrapper[4708]: I0227 17:11:21.166997 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-c9rhn"
Feb 27 17:11:21 crc kubenswrapper[4708]: I0227 17:11:21.170206 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-t8fhk"
Feb 27 17:11:21 crc kubenswrapper[4708]: I0227 17:11:21.171096 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Feb 27 17:11:21 crc kubenswrapper[4708]: I0227 17:11:21.171616 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Feb 27 17:11:21 crc kubenswrapper[4708]: I0227 17:11:21.182553 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-c9rhn"]
Feb 27 17:11:21 crc kubenswrapper[4708]: I0227 17:11:21.248973 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2542\" (UniqueName: \"kubernetes.io/projected/fa18ff6a-6113-4d56-bc69-c0959e6bb8a6-kube-api-access-j2542\") pod \"openstack-operator-index-c9rhn\" (UID: \"fa18ff6a-6113-4d56-bc69-c0959e6bb8a6\") " pod="openstack-operators/openstack-operator-index-c9rhn"
Feb 27 17:11:21 crc kubenswrapper[4708]: I0227 17:11:21.350450 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j2542\" (UniqueName: \"kubernetes.io/projected/fa18ff6a-6113-4d56-bc69-c0959e6bb8a6-kube-api-access-j2542\") pod \"openstack-operator-index-c9rhn\" (UID: \"fa18ff6a-6113-4d56-bc69-c0959e6bb8a6\") " pod="openstack-operators/openstack-operator-index-c9rhn"
Feb 27 17:11:21 crc kubenswrapper[4708]: I0227 17:11:21.368072 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j2542\" (UniqueName: \"kubernetes.io/projected/fa18ff6a-6113-4d56-bc69-c0959e6bb8a6-kube-api-access-j2542\") pod \"openstack-operator-index-c9rhn\" (UID: \"fa18ff6a-6113-4d56-bc69-c0959e6bb8a6\") " pod="openstack-operators/openstack-operator-index-c9rhn"
Feb 27 17:11:21 crc kubenswrapper[4708]: I0227 17:11:21.494770 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-c9rhn"
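The frr-k8s-k8mc9 startup record above ties its fields together arithmetically: podStartSLOduration is podStartE2EDuration (watchObservedRunningTime minus podCreationTimestamp) with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. A short Go check using the exact timestamps from that entry, which are printed in Go's default time.Time format:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Timestamps copied from the frr-k8s-k8mc9 startup record above.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	parse := func(s string) time.Time {
    		t, err := time.Parse(layout, s)
    		if err != nil {
    			panic(err)
    		}
    		return t
    	}
    	created := parse("2026-02-27 17:11:06 +0000 UTC")             // podCreationTimestamp
    	firstPull := parse("2026-02-27 17:11:07.255787365 +0000 UTC") // firstStartedPulling
    	lastPull := parse("2026-02-27 17:11:15.06461602 +0000 UTC")   // lastFinishedPulling
    	running := parse("2026-02-27 17:11:20.217537357 +0000 UTC")   // watchObservedRunningTime

    	e2e := running.Sub(created)          // 14.217537357s == podStartE2EDuration
    	slo := e2e - lastPull.Sub(firstPull) // 6.408708702s  == podStartSLOduration
    	fmt.Println(e2e, slo)
    }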
Feb 27 17:11:21 crc kubenswrapper[4708]: I0227 17:11:21.963387 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-c9rhn"]
Feb 27 17:11:21 crc kubenswrapper[4708]: W0227 17:11:21.983213 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfa18ff6a_6113_4d56_bc69_c0959e6bb8a6.slice/crio-96dd45974f11140fa913321bb04d2d2c504a0b1d8ec57897fa4e4401eec9dcc4 WatchSource:0}: Error finding container 96dd45974f11140fa913321bb04d2d2c504a0b1d8ec57897fa4e4401eec9dcc4: Status 404 returned error can't find the container with id 96dd45974f11140fa913321bb04d2d2c504a0b1d8ec57897fa4e4401eec9dcc4
Feb 27 17:11:22 crc kubenswrapper[4708]: I0227 17:11:22.151125 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-k8mc9"
Feb 27 17:11:22 crc kubenswrapper[4708]: I0227 17:11:22.195613 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-k8mc9"
Feb 27 17:11:22 crc kubenswrapper[4708]: I0227 17:11:22.205611 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-c9rhn" event={"ID":"fa18ff6a-6113-4d56-bc69-c0959e6bb8a6","Type":"ContainerStarted","Data":"96dd45974f11140fa913321bb04d2d2c504a0b1d8ec57897fa4e4401eec9dcc4"}
Feb 27 17:11:24 crc kubenswrapper[4708]: I0227 17:11:24.539580 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-c9rhn"]
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.142657 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-zk98t"]
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.144385 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zk98t"
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.161906 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zk98t"]
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.207238 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jc2l\" (UniqueName: \"kubernetes.io/projected/779d3205-e15f-4447-9e95-256243b04cf3-kube-api-access-6jc2l\") pod \"openstack-operator-index-zk98t\" (UID: \"779d3205-e15f-4447-9e95-256243b04cf3\") " pod="openstack-operators/openstack-operator-index-zk98t"
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.232369 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-c9rhn" event={"ID":"fa18ff6a-6113-4d56-bc69-c0959e6bb8a6","Type":"ContainerStarted","Data":"a2a97842d37736215b4306b3ef68d44cd84f70d96cb606a64603ea912b48d914"}
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.232675 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-c9rhn" podUID="fa18ff6a-6113-4d56-bc69-c0959e6bb8a6" containerName="registry-server" containerID="cri-o://a2a97842d37736215b4306b3ef68d44cd84f70d96cb606a64603ea912b48d914" gracePeriod=2
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.259247 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-c9rhn" podStartSLOduration=1.63434447 podStartE2EDuration="4.259221508s" podCreationTimestamp="2026-02-27 17:11:21 +0000 UTC" firstStartedPulling="2026-02-27 17:11:21.986209254 +0000 UTC m=+1080.502006841" lastFinishedPulling="2026-02-27 17:11:24.611086282 +0000 UTC m=+1083.126883879" observedRunningTime="2026-02-27 17:11:25.252002934 +0000 UTC m=+1083.767800561" watchObservedRunningTime="2026-02-27 17:11:25.259221508 +0000 UTC m=+1083.775019125"
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.317838 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jc2l\" (UniqueName: \"kubernetes.io/projected/779d3205-e15f-4447-9e95-256243b04cf3-kube-api-access-6jc2l\") pod \"openstack-operator-index-zk98t\" (UID: \"779d3205-e15f-4447-9e95-256243b04cf3\") " pod="openstack-operators/openstack-operator-index-zk98t"
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.353594 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jc2l\" (UniqueName: \"kubernetes.io/projected/779d3205-e15f-4447-9e95-256243b04cf3-kube-api-access-6jc2l\") pod \"openstack-operator-index-zk98t\" (UID: \"779d3205-e15f-4447-9e95-256243b04cf3\") " pod="openstack-operators/openstack-operator-index-zk98t"
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.498392 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-zk98t"
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.593999 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-c9rhn"
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.724474 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j2542\" (UniqueName: \"kubernetes.io/projected/fa18ff6a-6113-4d56-bc69-c0959e6bb8a6-kube-api-access-j2542\") pod \"fa18ff6a-6113-4d56-bc69-c0959e6bb8a6\" (UID: \"fa18ff6a-6113-4d56-bc69-c0959e6bb8a6\") "
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.730674 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa18ff6a-6113-4d56-bc69-c0959e6bb8a6-kube-api-access-j2542" (OuterVolumeSpecName: "kube-api-access-j2542") pod "fa18ff6a-6113-4d56-bc69-c0959e6bb8a6" (UID: "fa18ff6a-6113-4d56-bc69-c0959e6bb8a6"). InnerVolumeSpecName "kube-api-access-j2542". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.827820 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j2542\" (UniqueName: \"kubernetes.io/projected/fa18ff6a-6113-4d56-bc69-c0959e6bb8a6-kube-api-access-j2542\") on node \"crc\" DevicePath \"\""
Feb 27 17:11:25 crc kubenswrapper[4708]: W0227 17:11:25.900985 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod779d3205_e15f_4447_9e95_256243b04cf3.slice/crio-94a7f5935967aaad5714f77bc89ccdd865a8df0966b922a76c32063a2d1d91ea WatchSource:0}: Error finding container 94a7f5935967aaad5714f77bc89ccdd865a8df0966b922a76c32063a2d1d91ea: Status 404 returned error can't find the container with id 94a7f5935967aaad5714f77bc89ccdd865a8df0966b922a76c32063a2d1d91ea
Feb 27 17:11:25 crc kubenswrapper[4708]: I0227 17:11:25.902784 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-zk98t"]
Feb 27 17:11:26 crc kubenswrapper[4708]: I0227 17:11:26.243588 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zk98t" event={"ID":"779d3205-e15f-4447-9e95-256243b04cf3","Type":"ContainerStarted","Data":"2ee515522f6c7bd439588fc0c8220d880f856a46eb2d22e457ea649e66eef30c"}
Feb 27 17:11:26 crc kubenswrapper[4708]: I0227 17:11:26.243961 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-zk98t" event={"ID":"779d3205-e15f-4447-9e95-256243b04cf3","Type":"ContainerStarted","Data":"94a7f5935967aaad5714f77bc89ccdd865a8df0966b922a76c32063a2d1d91ea"}
Feb 27 17:11:26 crc kubenswrapper[4708]: I0227 17:11:26.246548 4708 generic.go:334] "Generic (PLEG): container finished" podID="fa18ff6a-6113-4d56-bc69-c0959e6bb8a6" containerID="a2a97842d37736215b4306b3ef68d44cd84f70d96cb606a64603ea912b48d914" exitCode=0
Feb 27 17:11:26 crc kubenswrapper[4708]: I0227 17:11:26.246622 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-c9rhn" event={"ID":"fa18ff6a-6113-4d56-bc69-c0959e6bb8a6","Type":"ContainerDied","Data":"a2a97842d37736215b4306b3ef68d44cd84f70d96cb606a64603ea912b48d914"}
Feb 27 17:11:26 crc kubenswrapper[4708]: I0227 17:11:26.246661 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-c9rhn" event={"ID":"fa18ff6a-6113-4d56-bc69-c0959e6bb8a6","Type":"ContainerDied","Data":"96dd45974f11140fa913321bb04d2d2c504a0b1d8ec57897fa4e4401eec9dcc4"}
Feb 27 17:11:26 crc kubenswrapper[4708]: I0227 17:11:26.246677 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-c9rhn"
Feb 27 17:11:26 crc kubenswrapper[4708]: I0227 17:11:26.246691 4708 scope.go:117] "RemoveContainer" containerID="a2a97842d37736215b4306b3ef68d44cd84f70d96cb606a64603ea912b48d914"
Feb 27 17:11:26 crc kubenswrapper[4708]: I0227 17:11:26.291054 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-zk98t" podStartSLOduration=1.22249791 podStartE2EDuration="1.291037579s" podCreationTimestamp="2026-02-27 17:11:25 +0000 UTC" firstStartedPulling="2026-02-27 17:11:25.906518551 +0000 UTC m=+1084.422316178" lastFinishedPulling="2026-02-27 17:11:25.97505822 +0000 UTC m=+1084.490855847" observedRunningTime="2026-02-27 17:11:26.290290828 +0000 UTC m=+1084.806088455" watchObservedRunningTime="2026-02-27 17:11:26.291037579 +0000 UTC m=+1084.806835176"
Feb 27 17:11:26 crc kubenswrapper[4708]: I0227 17:11:26.309840 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-c9rhn"]
Feb 27 17:11:26 crc kubenswrapper[4708]: I0227 17:11:26.314024 4708 scope.go:117] "RemoveContainer" containerID="a2a97842d37736215b4306b3ef68d44cd84f70d96cb606a64603ea912b48d914"
Feb 27 17:11:26 crc kubenswrapper[4708]: E0227 17:11:26.314502 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2a97842d37736215b4306b3ef68d44cd84f70d96cb606a64603ea912b48d914\": container with ID starting with a2a97842d37736215b4306b3ef68d44cd84f70d96cb606a64603ea912b48d914 not found: ID does not exist" containerID="a2a97842d37736215b4306b3ef68d44cd84f70d96cb606a64603ea912b48d914"
Feb 27 17:11:26 crc kubenswrapper[4708]: I0227 17:11:26.314562 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2a97842d37736215b4306b3ef68d44cd84f70d96cb606a64603ea912b48d914"} err="failed to get container status \"a2a97842d37736215b4306b3ef68d44cd84f70d96cb606a64603ea912b48d914\": rpc error: code = NotFound desc = could not find container \"a2a97842d37736215b4306b3ef68d44cd84f70d96cb606a64603ea912b48d914\": container with ID starting with a2a97842d37736215b4306b3ef68d44cd84f70d96cb606a64603ea912b48d914 not found: ID does not exist"
Feb 27 17:11:26 crc kubenswrapper[4708]: I0227 17:11:26.315680 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-c9rhn"]
Feb 27 17:11:26 crc kubenswrapper[4708]: I0227 17:11:26.572926 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7f989f654f-cnmrz"
Feb 27 17:11:26 crc kubenswrapper[4708]: I0227 17:11:26.678263 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-86ddb6bd46-lpk95"
Feb 27 17:11:28 crc kubenswrapper[4708]: I0227 17:11:28.241944 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa18ff6a-6113-4d56-bc69-c0959e6bb8a6" path="/var/lib/kubelet/pods/fa18ff6a-6113-4d56-bc69-c0959e6bb8a6/volumes"
Feb 27 17:11:35 crc kubenswrapper[4708]: I0227 17:11:35.498903 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-zk98t"
Feb 27 17:11:35 crc kubenswrapper[4708]: I0227 17:11:35.499547 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-zk98t"
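The RemoveContainer / "ContainerStatus from runtime service failed" pair above is a benign race: by the time the kubelet asks CRI-O to delete a2a97842..., the container is already gone, and the runtime answers with gRPC code NotFound. A hedged sketch of the usual client-side handling, where NotFound on deletion is treated as "already removed" rather than a failure (the helper name is illustrative, not kubelet code):

    package main

    import (
    	"fmt"

    	"google.golang.org/grpc/codes"
    	"google.golang.org/grpc/status"
    )

    // alreadyRemoved reports whether err is the runtime saying the container
    // no longer exists, which a deletion path can safely ignore.
    func alreadyRemoved(err error) bool {
    	st, ok := status.FromError(err)
    	return ok && st.Code() == codes.NotFound
    }

    func main() {
    	// Simulate the error shape seen in the log:
    	// rpc error: code = NotFound desc = could not find container ...
    	err := status.Error(codes.NotFound, "could not find container \"a2a97842...\"")
    	fmt.Println(alreadyRemoved(err)) // true: removal already done, not a failure
    }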
Feb 27 17:11:35 crc kubenswrapper[4708]: I0227 17:11:35.532280 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-zk98t"
Feb 27 17:11:36 crc kubenswrapper[4708]: I0227 17:11:36.391563 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-zk98t"
Feb 27 17:11:37 crc kubenswrapper[4708]: I0227 17:11:37.155341 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-k8mc9"
Feb 27 17:11:42 crc kubenswrapper[4708]: I0227 17:11:42.801111 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"]
Feb 27 17:11:42 crc kubenswrapper[4708]: E0227 17:11:42.802724 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa18ff6a-6113-4d56-bc69-c0959e6bb8a6" containerName="registry-server"
Feb 27 17:11:42 crc kubenswrapper[4708]: I0227 17:11:42.802752 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa18ff6a-6113-4d56-bc69-c0959e6bb8a6" containerName="registry-server"
Feb 27 17:11:42 crc kubenswrapper[4708]: I0227 17:11:42.803048 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa18ff6a-6113-4d56-bc69-c0959e6bb8a6" containerName="registry-server"
Feb 27 17:11:42 crc kubenswrapper[4708]: I0227 17:11:42.804837 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"
Feb 27 17:11:42 crc kubenswrapper[4708]: I0227 17:11:42.810426 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-qm9kd"
Feb 27 17:11:42 crc kubenswrapper[4708]: I0227 17:11:42.823610 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"]
Feb 27 17:11:42 crc kubenswrapper[4708]: I0227 17:11:42.990730 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/18f2082e-b7e1-4045-9853-b790e42cbe82-bundle\") pod \"b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g\" (UID: \"18f2082e-b7e1-4045-9853-b790e42cbe82\") " pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"
Feb 27 17:11:42 crc kubenswrapper[4708]: I0227 17:11:42.990913 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18f2082e-b7e1-4045-9853-b790e42cbe82-util\") pod \"b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g\" (UID: \"18f2082e-b7e1-4045-9853-b790e42cbe82\") " pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"
Feb 27 17:11:42 crc kubenswrapper[4708]: I0227 17:11:42.990966 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqqgq\" (UniqueName: \"kubernetes.io/projected/18f2082e-b7e1-4045-9853-b790e42cbe82-kube-api-access-fqqgq\") pod \"b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g\" (UID: \"18f2082e-b7e1-4045-9853-b790e42cbe82\") " pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"
Feb 27 17:11:43 crc kubenswrapper[4708]: I0227 17:11:43.093009 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/18f2082e-b7e1-4045-9853-b790e42cbe82-bundle\") pod \"b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g\" (UID: \"18f2082e-b7e1-4045-9853-b790e42cbe82\") " pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"
Feb 27 17:11:43 crc kubenswrapper[4708]: I0227 17:11:43.093154 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18f2082e-b7e1-4045-9853-b790e42cbe82-util\") pod \"b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g\" (UID: \"18f2082e-b7e1-4045-9853-b790e42cbe82\") " pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"
Feb 27 17:11:43 crc kubenswrapper[4708]: I0227 17:11:43.093212 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqqgq\" (UniqueName: \"kubernetes.io/projected/18f2082e-b7e1-4045-9853-b790e42cbe82-kube-api-access-fqqgq\") pod \"b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g\" (UID: \"18f2082e-b7e1-4045-9853-b790e42cbe82\") " pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"
Feb 27 17:11:43 crc kubenswrapper[4708]: I0227 17:11:43.093908 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18f2082e-b7e1-4045-9853-b790e42cbe82-util\") pod \"b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g\" (UID: \"18f2082e-b7e1-4045-9853-b790e42cbe82\") " pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"
Feb 27 17:11:43 crc kubenswrapper[4708]: I0227 17:11:43.094380 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/18f2082e-b7e1-4045-9853-b790e42cbe82-bundle\") pod \"b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g\" (UID: \"18f2082e-b7e1-4045-9853-b790e42cbe82\") " pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"
Feb 27 17:11:43 crc kubenswrapper[4708]: I0227 17:11:43.133239 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqqgq\" (UniqueName: \"kubernetes.io/projected/18f2082e-b7e1-4045-9853-b790e42cbe82-kube-api-access-fqqgq\") pod \"b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g\" (UID: \"18f2082e-b7e1-4045-9853-b790e42cbe82\") " pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"
Feb 27 17:11:43 crc kubenswrapper[4708]: I0227 17:11:43.427652 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"
Feb 27 17:11:43 crc kubenswrapper[4708]: I0227 17:11:43.731923 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"]
Feb 27 17:11:44 crc kubenswrapper[4708]: I0227 17:11:44.400662 4708 generic.go:334] "Generic (PLEG): container finished" podID="18f2082e-b7e1-4045-9853-b790e42cbe82" containerID="fec1f21c321d15e1fbac3f47015bbe7ffaf19ebc3da3bfd18bcc6e8a505b4ef5" exitCode=0
Feb 27 17:11:44 crc kubenswrapper[4708]: I0227 17:11:44.400731 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g" event={"ID":"18f2082e-b7e1-4045-9853-b790e42cbe82","Type":"ContainerDied","Data":"fec1f21c321d15e1fbac3f47015bbe7ffaf19ebc3da3bfd18bcc6e8a505b4ef5"}
Feb 27 17:11:44 crc kubenswrapper[4708]: I0227 17:11:44.401092 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g" event={"ID":"18f2082e-b7e1-4045-9853-b790e42cbe82","Type":"ContainerStarted","Data":"7d6ad43959d4f193c08629046d53e3eebc07ba5f6bc223bbcaf24c19120110e7"}
Feb 27 17:11:45 crc kubenswrapper[4708]: I0227 17:11:45.413495 4708 generic.go:334] "Generic (PLEG): container finished" podID="18f2082e-b7e1-4045-9853-b790e42cbe82" containerID="b39ac461eaf380da9510ef9f4bd077861dc225ce126430bddf7c97b2f2d03cd8" exitCode=0
Feb 27 17:11:45 crc kubenswrapper[4708]: I0227 17:11:45.413593 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g" event={"ID":"18f2082e-b7e1-4045-9853-b790e42cbe82","Type":"ContainerDied","Data":"b39ac461eaf380da9510ef9f4bd077861dc225ce126430bddf7c97b2f2d03cd8"}
Feb 27 17:11:46 crc kubenswrapper[4708]: I0227 17:11:46.427147 4708 generic.go:334] "Generic (PLEG): container finished" podID="18f2082e-b7e1-4045-9853-b790e42cbe82" containerID="1628bd42e2134f47892db44a712e3cce57f1d894108429995266d7e1defbfc0d" exitCode=0
Feb 27 17:11:46 crc kubenswrapper[4708]: I0227 17:11:46.427212 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g" event={"ID":"18f2082e-b7e1-4045-9853-b790e42cbe82","Type":"ContainerDied","Data":"1628bd42e2134f47892db44a712e3cce57f1d894108429995266d7e1defbfc0d"}
Feb 27 17:11:47 crc kubenswrapper[4708]: I0227 17:11:47.765548 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"
Feb 27 17:11:47 crc kubenswrapper[4708]: I0227 17:11:47.787796 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18f2082e-b7e1-4045-9853-b790e42cbe82-util\") pod \"18f2082e-b7e1-4045-9853-b790e42cbe82\" (UID: \"18f2082e-b7e1-4045-9853-b790e42cbe82\") "
Feb 27 17:11:47 crc kubenswrapper[4708]: I0227 17:11:47.787913 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqqgq\" (UniqueName: \"kubernetes.io/projected/18f2082e-b7e1-4045-9853-b790e42cbe82-kube-api-access-fqqgq\") pod \"18f2082e-b7e1-4045-9853-b790e42cbe82\" (UID: \"18f2082e-b7e1-4045-9853-b790e42cbe82\") "
Feb 27 17:11:47 crc kubenswrapper[4708]: I0227 17:11:47.787982 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/18f2082e-b7e1-4045-9853-b790e42cbe82-bundle\") pod \"18f2082e-b7e1-4045-9853-b790e42cbe82\" (UID: \"18f2082e-b7e1-4045-9853-b790e42cbe82\") "
Feb 27 17:11:47 crc kubenswrapper[4708]: I0227 17:11:47.789474 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18f2082e-b7e1-4045-9853-b790e42cbe82-bundle" (OuterVolumeSpecName: "bundle") pod "18f2082e-b7e1-4045-9853-b790e42cbe82" (UID: "18f2082e-b7e1-4045-9853-b790e42cbe82"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:11:47 crc kubenswrapper[4708]: I0227 17:11:47.795063 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f2082e-b7e1-4045-9853-b790e42cbe82-kube-api-access-fqqgq" (OuterVolumeSpecName: "kube-api-access-fqqgq") pod "18f2082e-b7e1-4045-9853-b790e42cbe82" (UID: "18f2082e-b7e1-4045-9853-b790e42cbe82"). InnerVolumeSpecName "kube-api-access-fqqgq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:11:47 crc kubenswrapper[4708]: I0227 17:11:47.806906 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/18f2082e-b7e1-4045-9853-b790e42cbe82-util" (OuterVolumeSpecName: "util") pod "18f2082e-b7e1-4045-9853-b790e42cbe82" (UID: "18f2082e-b7e1-4045-9853-b790e42cbe82"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:11:47 crc kubenswrapper[4708]: I0227 17:11:47.890028 4708 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/18f2082e-b7e1-4045-9853-b790e42cbe82-util\") on node \"crc\" DevicePath \"\""
Feb 27 17:11:47 crc kubenswrapper[4708]: I0227 17:11:47.890085 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqqgq\" (UniqueName: \"kubernetes.io/projected/18f2082e-b7e1-4045-9853-b790e42cbe82-kube-api-access-fqqgq\") on node \"crc\" DevicePath \"\""
Feb 27 17:11:47 crc kubenswrapper[4708]: I0227 17:11:47.890106 4708 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/18f2082e-b7e1-4045-9853-b790e42cbe82-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:11:48 crc kubenswrapper[4708]: I0227 17:11:48.445180 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g" event={"ID":"18f2082e-b7e1-4045-9853-b790e42cbe82","Type":"ContainerDied","Data":"7d6ad43959d4f193c08629046d53e3eebc07ba5f6bc223bbcaf24c19120110e7"}
Feb 27 17:11:48 crc kubenswrapper[4708]: I0227 17:11:48.445234 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d6ad43959d4f193c08629046d53e3eebc07ba5f6bc223bbcaf24c19120110e7"
Feb 27 17:11:48 crc kubenswrapper[4708]: I0227 17:11:48.445292 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g"
Feb 27 17:11:55 crc kubenswrapper[4708]: I0227 17:11:55.047715 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-7fb98c5bdd-sptst"]
Feb 27 17:11:55 crc kubenswrapper[4708]: E0227 17:11:55.048167 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f2082e-b7e1-4045-9853-b790e42cbe82" containerName="extract"
Feb 27 17:11:55 crc kubenswrapper[4708]: I0227 17:11:55.048179 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f2082e-b7e1-4045-9853-b790e42cbe82" containerName="extract"
Feb 27 17:11:55 crc kubenswrapper[4708]: E0227 17:11:55.048190 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f2082e-b7e1-4045-9853-b790e42cbe82" containerName="pull"
Feb 27 17:11:55 crc kubenswrapper[4708]: I0227 17:11:55.048197 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f2082e-b7e1-4045-9853-b790e42cbe82" containerName="pull"
Feb 27 17:11:55 crc kubenswrapper[4708]: E0227 17:11:55.048212 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18f2082e-b7e1-4045-9853-b790e42cbe82" containerName="util"
Feb 27 17:11:55 crc kubenswrapper[4708]: I0227 17:11:55.048218 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="18f2082e-b7e1-4045-9853-b790e42cbe82" containerName="util"
Feb 27 17:11:55 crc kubenswrapper[4708]: I0227 17:11:55.048318 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="18f2082e-b7e1-4045-9853-b790e42cbe82" containerName="extract"
Feb 27 17:11:55 crc kubenswrapper[4708]: I0227 17:11:55.048727 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7fb98c5bdd-sptst"
Feb 27 17:11:55 crc kubenswrapper[4708]: I0227 17:11:55.053965 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-fg85k"
Feb 27 17:11:55 crc kubenswrapper[4708]: I0227 17:11:55.072934 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7fb98c5bdd-sptst"]
Feb 27 17:11:55 crc kubenswrapper[4708]: I0227 17:11:55.089004 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cl6dz\" (UniqueName: \"kubernetes.io/projected/c23c26b6-9e2d-46bf-9b7b-7e942361e3bc-kube-api-access-cl6dz\") pod \"openstack-operator-controller-init-7fb98c5bdd-sptst\" (UID: \"c23c26b6-9e2d-46bf-9b7b-7e942361e3bc\") " pod="openstack-operators/openstack-operator-controller-init-7fb98c5bdd-sptst"
Feb 27 17:11:55 crc kubenswrapper[4708]: I0227 17:11:55.190760 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cl6dz\" (UniqueName: \"kubernetes.io/projected/c23c26b6-9e2d-46bf-9b7b-7e942361e3bc-kube-api-access-cl6dz\") pod \"openstack-operator-controller-init-7fb98c5bdd-sptst\" (UID: \"c23c26b6-9e2d-46bf-9b7b-7e942361e3bc\") " pod="openstack-operators/openstack-operator-controller-init-7fb98c5bdd-sptst"
Feb 27 17:11:55 crc kubenswrapper[4708]: I0227 17:11:55.208946 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cl6dz\" (UniqueName: \"kubernetes.io/projected/c23c26b6-9e2d-46bf-9b7b-7e942361e3bc-kube-api-access-cl6dz\") pod \"openstack-operator-controller-init-7fb98c5bdd-sptst\" (UID: \"c23c26b6-9e2d-46bf-9b7b-7e942361e3bc\") " pod="openstack-operators/openstack-operator-controller-init-7fb98c5bdd-sptst"
Feb 27 17:11:55 crc kubenswrapper[4708]: I0227 17:11:55.366487 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-7fb98c5bdd-sptst"
Feb 27 17:11:55 crc kubenswrapper[4708]: I0227 17:11:55.604441 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-7fb98c5bdd-sptst"]
Feb 27 17:11:56 crc kubenswrapper[4708]: I0227 17:11:56.520361 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7fb98c5bdd-sptst" event={"ID":"c23c26b6-9e2d-46bf-9b7b-7e942361e3bc","Type":"ContainerStarted","Data":"1e0cd213f5b0fde717e36935c4baf7ec2391abe8cd289b5782fe486d3daf45da"}
Feb 27 17:12:00 crc kubenswrapper[4708]: I0227 17:12:00.149460 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536872-h4tms"]
Feb 27 17:12:00 crc kubenswrapper[4708]: I0227 17:12:00.151745 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536872-h4tms"
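The Job pod name auto-csr-approver-29536872-h4tms follows the CronJob controller's convention of suffixing the scheduled run time expressed in minutes since the Unix epoch. A one-line Go check confirms the suffix decodes to the 17:12:00 tick seen in the surrounding entries:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// 29536872 minutes since the Unix epoch, as encoded in the Job name.
    	fmt.Println(time.Unix(29536872*60, 0).UTC()) // 2026-02-27 17:12:00 +0000 UTC
    }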
Feb 27 17:12:00 crc kubenswrapper[4708]: I0227 17:12:00.154209 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 17:12:00 crc kubenswrapper[4708]: I0227 17:12:00.156128 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5"
Feb 27 17:12:00 crc kubenswrapper[4708]: I0227 17:12:00.161576 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 17:12:00 crc kubenswrapper[4708]: I0227 17:12:00.169202 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536872-h4tms"]
Feb 27 17:12:00 crc kubenswrapper[4708]: I0227 17:12:00.214093 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qwvn\" (UniqueName: \"kubernetes.io/projected/3343713b-a255-4d89-8501-22d02150e6ef-kube-api-access-5qwvn\") pod \"auto-csr-approver-29536872-h4tms\" (UID: \"3343713b-a255-4d89-8501-22d02150e6ef\") " pod="openshift-infra/auto-csr-approver-29536872-h4tms"
Feb 27 17:12:00 crc kubenswrapper[4708]: I0227 17:12:00.316677 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qwvn\" (UniqueName: \"kubernetes.io/projected/3343713b-a255-4d89-8501-22d02150e6ef-kube-api-access-5qwvn\") pod \"auto-csr-approver-29536872-h4tms\" (UID: \"3343713b-a255-4d89-8501-22d02150e6ef\") " pod="openshift-infra/auto-csr-approver-29536872-h4tms"
Feb 27 17:12:00 crc kubenswrapper[4708]: I0227 17:12:00.347638 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qwvn\" (UniqueName: \"kubernetes.io/projected/3343713b-a255-4d89-8501-22d02150e6ef-kube-api-access-5qwvn\") pod \"auto-csr-approver-29536872-h4tms\" (UID: \"3343713b-a255-4d89-8501-22d02150e6ef\") " pod="openshift-infra/auto-csr-approver-29536872-h4tms"
Feb 27 17:12:00 crc kubenswrapper[4708]: I0227 17:12:00.524870 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536872-h4tms"
Feb 27 17:12:00 crc kubenswrapper[4708]: I0227 17:12:00.552698 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-7fb98c5bdd-sptst" event={"ID":"c23c26b6-9e2d-46bf-9b7b-7e942361e3bc","Type":"ContainerStarted","Data":"2ba8eb9d4d3fe70b8da7acff747cad42dd3576cb57f0aca3f14f1cb5c16fc0e1"}
Feb 27 17:12:00 crc kubenswrapper[4708]: I0227 17:12:00.552886 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-7fb98c5bdd-sptst"
Feb 27 17:12:00 crc kubenswrapper[4708]: I0227 17:12:00.594785 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-7fb98c5bdd-sptst" podStartSLOduration=1.322024677 podStartE2EDuration="5.594759215s" podCreationTimestamp="2026-02-27 17:11:55 +0000 UTC" firstStartedPulling="2026-02-27 17:11:55.613219985 +0000 UTC m=+1114.129017582" lastFinishedPulling="2026-02-27 17:11:59.885954493 +0000 UTC m=+1118.401752120" observedRunningTime="2026-02-27 17:12:00.591795031 +0000 UTC m=+1119.107592638" watchObservedRunningTime="2026-02-27 17:12:00.594759215 +0000 UTC m=+1119.110556842"
Feb 27 17:12:00 crc kubenswrapper[4708]: I0227 17:12:00.989325 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536872-h4tms"]
Feb 27 17:12:00 crc kubenswrapper[4708]: W0227 17:12:00.995679 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3343713b_a255_4d89_8501_22d02150e6ef.slice/crio-9643805b34402907177ecf8a76916f14f2cb1618ffd69670b2ef1430a126aa04 WatchSource:0}: Error finding container 9643805b34402907177ecf8a76916f14f2cb1618ffd69670b2ef1430a126aa04: Status 404 returned error can't find the container with id 9643805b34402907177ecf8a76916f14f2cb1618ffd69670b2ef1430a126aa04
Feb 27 17:12:01 crc kubenswrapper[4708]: I0227 17:12:01.565077 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536872-h4tms" event={"ID":"3343713b-a255-4d89-8501-22d02150e6ef","Type":"ContainerStarted","Data":"9643805b34402907177ecf8a76916f14f2cb1618ffd69670b2ef1430a126aa04"}
Feb 27 17:12:03 crc kubenswrapper[4708]: I0227 17:12:03.584998 4708 generic.go:334] "Generic (PLEG): container finished" podID="3343713b-a255-4d89-8501-22d02150e6ef" containerID="b87cf45d3c166fc49119601a72fbee45c14fadffa35e3f34c1ac439e5db92d82" exitCode=0
Feb 27 17:12:03 crc kubenswrapper[4708]: I0227 17:12:03.585067 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536872-h4tms" event={"ID":"3343713b-a255-4d89-8501-22d02150e6ef","Type":"ContainerDied","Data":"b87cf45d3c166fc49119601a72fbee45c14fadffa35e3f34c1ac439e5db92d82"}
Feb 27 17:12:04 crc kubenswrapper[4708]: I0227 17:12:04.916040 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536872-h4tms"
Feb 27 17:12:05 crc kubenswrapper[4708]: I0227 17:12:05.090996 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qwvn\" (UniqueName: \"kubernetes.io/projected/3343713b-a255-4d89-8501-22d02150e6ef-kube-api-access-5qwvn\") pod \"3343713b-a255-4d89-8501-22d02150e6ef\" (UID: \"3343713b-a255-4d89-8501-22d02150e6ef\") "
Feb 27 17:12:05 crc kubenswrapper[4708]: I0227 17:12:05.101158 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3343713b-a255-4d89-8501-22d02150e6ef-kube-api-access-5qwvn" (OuterVolumeSpecName: "kube-api-access-5qwvn") pod "3343713b-a255-4d89-8501-22d02150e6ef" (UID: "3343713b-a255-4d89-8501-22d02150e6ef"). InnerVolumeSpecName "kube-api-access-5qwvn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:12:05 crc kubenswrapper[4708]: I0227 17:12:05.192682 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qwvn\" (UniqueName: \"kubernetes.io/projected/3343713b-a255-4d89-8501-22d02150e6ef-kube-api-access-5qwvn\") on node \"crc\" DevicePath \"\""
Feb 27 17:12:05 crc kubenswrapper[4708]: I0227 17:12:05.371186 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-7fb98c5bdd-sptst"
Feb 27 17:12:05 crc kubenswrapper[4708]: I0227 17:12:05.605189 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536872-h4tms" event={"ID":"3343713b-a255-4d89-8501-22d02150e6ef","Type":"ContainerDied","Data":"9643805b34402907177ecf8a76916f14f2cb1618ffd69670b2ef1430a126aa04"}
Feb 27 17:12:05 crc kubenswrapper[4708]: I0227 17:12:05.605232 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9643805b34402907177ecf8a76916f14f2cb1618ffd69670b2ef1430a126aa04"
Feb 27 17:12:05 crc kubenswrapper[4708]: I0227 17:12:05.605267 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536872-h4tms"
Feb 27 17:12:05 crc kubenswrapper[4708]: I0227 17:12:05.632760 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 17:12:05 crc kubenswrapper[4708]: I0227 17:12:05.632829 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 17:12:06 crc kubenswrapper[4708]: I0227 17:12:06.003840 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536866-n7s88"]
Feb 27 17:12:06 crc kubenswrapper[4708]: I0227 17:12:06.014786 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536866-n7s88"]
Feb 27 17:12:06 crc kubenswrapper[4708]: I0227 17:12:06.243196 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2678f124-0296-46f6-9df8-ec03bde26be0" path="/var/lib/kubelet/pods/2678f124-0296-46f6-9df8-ec03bde26be0/volumes"
Feb 27 17:12:23 crc kubenswrapper[4708]: I0227 17:12:23.464099 4708 scope.go:117] "RemoveContainer" containerID="a898a6e2591ffb60664c4d93c890b80f304b9ddffd7d4a5c0e14e049f690f07c"
Feb 27 17:12:35 crc kubenswrapper[4708]: I0227 17:12:35.631137 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 17:12:35 crc kubenswrapper[4708]: I0227 17:12:35.631551 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.200061 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-2hw5z"]
Feb 27 17:12:45 crc kubenswrapper[4708]: E0227 17:12:45.200822 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3343713b-a255-4d89-8501-22d02150e6ef" containerName="oc"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.200835 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="3343713b-a255-4d89-8501-22d02150e6ef" containerName="oc"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.200993 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="3343713b-a255-4d89-8501-22d02150e6ef" containerName="oc"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.201467 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2hw5z"
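The recurring machine-config-daemon failures above are plain HTTP liveness checks: the kubelet issues a GET to http://127.0.0.1:8798/health and the TCP connection is refused. A standalone Go sketch of the probe semantics follows (status codes 200-399 count as success; the 1s timeout is an assumption for illustration, not the pod's configured value):

    package main

    import (
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{Timeout: 1 * time.Second} // probes enforce a timeout
    	resp, err := client.Get("http://127.0.0.1:8798/health")
    	if err != nil {
    		// Matches the log: dial tcp 127.0.0.1:8798: connect: connection refused
    		fmt.Println("probe failure:", err)
    		return
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
    		fmt.Println("probe success:", resp.Status)
    	} else {
    		fmt.Println("probe failure:", resp.Status)
    	}
    }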
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.203982 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-kgw66"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.205882 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.206467 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.207902 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-mwvqp"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.212979 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.216909 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-2hw5z"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.225766 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.226557 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.229619 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-jspvg"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.265068 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.294894 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-c4pj6"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.295738 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-c4pj6"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.303183 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-5nrb9"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.303932 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5nrb9"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.304791 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-hmj9s"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.307007 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wv64j"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.307759 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wv64j"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.309030 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-4zzsq"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.320460 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.321251 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.321330 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-zdvlt"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.324992 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.332571 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-sphwp"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.338435 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htpl4\" (UniqueName: \"kubernetes.io/projected/038010da-affb-4db1-88e9-67e8ee1304cc-kube-api-access-htpl4\") pod \"designate-operator-controller-manager-5d87c9d997-wffwh\" (UID: \"038010da-affb-4db1-88e9-67e8ee1304cc\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.338493 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp5k6\" (UniqueName: \"kubernetes.io/projected/45efdeea-5e44-44b0-b9d0-e2cc8c441168-kube-api-access-lp5k6\") pod \"cinder-operator-controller-manager-55d77d7b5c-4kwbb\" (UID: \"45efdeea-5e44-44b0-b9d0-e2cc8c441168\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.338564 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2khb\" (UniqueName: \"kubernetes.io/projected/3fd10334-e172-4f8f-8f20-9d447937468f-kube-api-access-m2khb\") pod \"barbican-operator-controller-manager-6db6876945-2hw5z\" (UID: \"3fd10334-e172-4f8f-8f20-9d447937468f\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2hw5z"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.345830 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-c4pj6"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.396212 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-5nrb9"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.442646 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wv64j"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.442691 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.443570 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2khb\" (UniqueName: \"kubernetes.io/projected/3fd10334-e172-4f8f-8f20-9d447937468f-kube-api-access-m2khb\") pod \"barbican-operator-controller-manager-6db6876945-2hw5z\" (UID: \"3fd10334-e172-4f8f-8f20-9d447937468f\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2hw5z"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.443651 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn5dx\" (UniqueName: \"kubernetes.io/projected/5ea0106c-7f8b-493f-847f-da8b5ee33395-kube-api-access-nn5dx\") pod \"glance-operator-controller-manager-64db6967f8-5nrb9\" (UID: \"5ea0106c-7f8b-493f-847f-da8b5ee33395\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5nrb9"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.443673 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htpl4\" (UniqueName: \"kubernetes.io/projected/038010da-affb-4db1-88e9-67e8ee1304cc-kube-api-access-htpl4\") pod \"designate-operator-controller-manager-5d87c9d997-wffwh\" (UID: \"038010da-affb-4db1-88e9-67e8ee1304cc\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.443694 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp5k6\" (UniqueName: \"kubernetes.io/projected/45efdeea-5e44-44b0-b9d0-e2cc8c441168-kube-api-access-lp5k6\") pod \"cinder-operator-controller-manager-55d77d7b5c-4kwbb\" (UID: \"45efdeea-5e44-44b0-b9d0-e2cc8c441168\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.443717 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzr5g\" (UniqueName: \"kubernetes.io/projected/b2819715-8c70-4b6f-8199-8e122f5b03e4-kube-api-access-jzr5g\") pod \"heat-operator-controller-manager-cf99c678f-c4pj6\" (UID: \"b2819715-8c70-4b6f-8199-8e122f5b03e4\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-c4pj6"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.443753 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw7kq\" (UniqueName: \"kubernetes.io/projected/9ff0a3b0-a6e8-4f03-bbca-b04e516cfaff-kube-api-access-mw7kq\") pod \"horizon-operator-controller-manager-78bc7f9bd9-wv64j\" (UID: \"9ff0a3b0-a6e8-4f03-bbca-b04e516cfaff\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wv64j"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.443772 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-sxmk5\" (UID: \"dde28522-3138-4c50-b3c5-1e26d61b96e1\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.443796 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbnsd\" (UniqueName: \"kubernetes.io/projected/dde28522-3138-4c50-b3c5-1e26d61b96e1-kube-api-access-rbnsd\") pod \"infra-operator-controller-manager-f7fcc58b9-sxmk5\" (UID: \"dde28522-3138-4c50-b3c5-1e26d61b96e1\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.451615 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-wp777"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.451930 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.452940 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wp777"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.456704 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rhczx"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.456953 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-qlsnk"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.471579 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.471637 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lp5k6\" (UniqueName: \"kubernetes.io/projected/45efdeea-5e44-44b0-b9d0-e2cc8c441168-kube-api-access-lp5k6\") pod \"cinder-operator-controller-manager-55d77d7b5c-4kwbb\" (UID: \"45efdeea-5e44-44b0-b9d0-e2cc8c441168\") " pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb"
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.481812 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-kj8hq"]
Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.482960 4708 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-kj8hq" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.482997 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2khb\" (UniqueName: \"kubernetes.io/projected/3fd10334-e172-4f8f-8f20-9d447937468f-kube-api-access-m2khb\") pod \"barbican-operator-controller-manager-6db6876945-2hw5z\" (UID: \"3fd10334-e172-4f8f-8f20-9d447937468f\") " pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2hw5z" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.487152 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-v6h5n" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.488399 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htpl4\" (UniqueName: \"kubernetes.io/projected/038010da-affb-4db1-88e9-67e8ee1304cc-kube-api-access-htpl4\") pod \"designate-operator-controller-manager-5d87c9d997-wffwh\" (UID: \"038010da-affb-4db1-88e9-67e8ee1304cc\") " pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.492566 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-wp777"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.517422 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.539717 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-kj8hq"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.544466 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw7kq\" (UniqueName: \"kubernetes.io/projected/9ff0a3b0-a6e8-4f03-bbca-b04e516cfaff-kube-api-access-mw7kq\") pod \"horizon-operator-controller-manager-78bc7f9bd9-wv64j\" (UID: \"9ff0a3b0-a6e8-4f03-bbca-b04e516cfaff\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wv64j" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.544501 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-sxmk5\" (UID: \"dde28522-3138-4c50-b3c5-1e26d61b96e1\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.544529 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rbnsd\" (UniqueName: \"kubernetes.io/projected/dde28522-3138-4c50-b3c5-1e26d61b96e1-kube-api-access-rbnsd\") pod \"infra-operator-controller-manager-f7fcc58b9-sxmk5\" (UID: \"dde28522-3138-4c50-b3c5-1e26d61b96e1\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.544556 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8g69\" (UniqueName: \"kubernetes.io/projected/f2e64742-9a09-4f5a-b8d5-ec938e7ac27b-kube-api-access-p8g69\") pod \"keystone-operator-controller-manager-55ffd4876b-n66z2\" (UID: \"f2e64742-9a09-4f5a-b8d5-ec938e7ac27b\") " 
pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.544611 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn5dx\" (UniqueName: \"kubernetes.io/projected/5ea0106c-7f8b-493f-847f-da8b5ee33395-kube-api-access-nn5dx\") pod \"glance-operator-controller-manager-64db6967f8-5nrb9\" (UID: \"5ea0106c-7f8b-493f-847f-da8b5ee33395\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5nrb9" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.544640 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzr5g\" (UniqueName: \"kubernetes.io/projected/b2819715-8c70-4b6f-8199-8e122f5b03e4-kube-api-access-jzr5g\") pod \"heat-operator-controller-manager-cf99c678f-c4pj6\" (UID: \"b2819715-8c70-4b6f-8199-8e122f5b03e4\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-c4pj6" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.544669 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t8bln\" (UniqueName: \"kubernetes.io/projected/f3ca9720-d51d-4c81-9aa0-3c21947be164-kube-api-access-t8bln\") pod \"ironic-operator-controller-manager-545456dc4-wp777\" (UID: \"f3ca9720-d51d-4c81-9aa0-3c21947be164\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wp777" Feb 27 17:12:45 crc kubenswrapper[4708]: E0227 17:12:45.544682 4708 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 17:12:45 crc kubenswrapper[4708]: E0227 17:12:45.544741 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert podName:dde28522-3138-4c50-b3c5-1e26d61b96e1 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:46.044722467 +0000 UTC m=+1164.560520054 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert") pod "infra-operator-controller-manager-f7fcc58b9-sxmk5" (UID: "dde28522-3138-4c50-b3c5-1e26d61b96e1") : secret "infra-operator-webhook-server-cert" not found Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.546215 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-556b8b874-mcvwl"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.549891 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-mcvwl" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.554796 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-6wpf9" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.565996 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-d29bm"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.566970 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-d29bm" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.569684 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw7kq\" (UniqueName: \"kubernetes.io/projected/9ff0a3b0-a6e8-4f03-bbca-b04e516cfaff-kube-api-access-mw7kq\") pod \"horizon-operator-controller-manager-78bc7f9bd9-wv64j\" (UID: \"9ff0a3b0-a6e8-4f03-bbca-b04e516cfaff\") " pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wv64j" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.570382 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbnsd\" (UniqueName: \"kubernetes.io/projected/dde28522-3138-4c50-b3c5-1e26d61b96e1-kube-api-access-rbnsd\") pod \"infra-operator-controller-manager-f7fcc58b9-sxmk5\" (UID: \"dde28522-3138-4c50-b3c5-1e26d61b96e1\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.570668 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-wdgbz" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.570808 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2hw5z" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.571091 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.574584 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn5dx\" (UniqueName: \"kubernetes.io/projected/5ea0106c-7f8b-493f-847f-da8b5ee33395-kube-api-access-nn5dx\") pod \"glance-operator-controller-manager-64db6967f8-5nrb9\" (UID: \"5ea0106c-7f8b-493f-847f-da8b5ee33395\") " pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5nrb9" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.581037 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-556b8b874-mcvwl"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.581617 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.603301 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.604110 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.608491 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-t7vkg" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.609638 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzr5g\" (UniqueName: \"kubernetes.io/projected/b2819715-8c70-4b6f-8199-8e122f5b03e4-kube-api-access-jzr5g\") pod \"heat-operator-controller-manager-cf99c678f-c4pj6\" (UID: \"b2819715-8c70-4b6f-8199-8e122f5b03e4\") " pod="openstack-operators/heat-operator-controller-manager-cf99c678f-c4pj6" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.611518 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-d29bm"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.613878 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-c4pj6" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.626289 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5nrb9" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.632859 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.638687 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wv64j" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.643878 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.644715 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.646480 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhddh\" (UniqueName: \"kubernetes.io/projected/df5608da-0dbc-4335-b221-feb484afd410-kube-api-access-xhddh\") pod \"mariadb-operator-controller-manager-556b8b874-mcvwl\" (UID: \"df5608da-0dbc-4335-b221-feb484afd410\") " pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-mcvwl" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.646516 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhs8n\" (UniqueName: \"kubernetes.io/projected/f52bc8c9-30b0-4f44-8f5c-f2af4c7176d5-kube-api-access-hhs8n\") pod \"neutron-operator-controller-manager-54688575f-d29bm\" (UID: \"f52bc8c9-30b0-4f44-8f5c-f2af4c7176d5\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-d29bm" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.646561 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t8bln\" (UniqueName: \"kubernetes.io/projected/f3ca9720-d51d-4c81-9aa0-3c21947be164-kube-api-access-t8bln\") pod \"ironic-operator-controller-manager-545456dc4-wp777\" (UID: \"f3ca9720-d51d-4c81-9aa0-3c21947be164\") " pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wp777" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.646637 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8g69\" (UniqueName: \"kubernetes.io/projected/f2e64742-9a09-4f5a-b8d5-ec938e7ac27b-kube-api-access-p8g69\") pod \"keystone-operator-controller-manager-55ffd4876b-n66z2\" (UID: \"f2e64742-9a09-4f5a-b8d5-ec938e7ac27b\") " pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.646684 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkvxr\" (UniqueName: \"kubernetes.io/projected/1ade7297-180b-4c42-85b7-5edaf33dd0b4-kube-api-access-bkvxr\") pod \"manila-operator-controller-manager-67d996989d-kj8hq\" (UID: \"1ade7297-180b-4c42-85b7-5edaf33dd0b4\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-kj8hq" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.647218 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-hkp9x" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.662975 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8g69\" (UniqueName: \"kubernetes.io/projected/f2e64742-9a09-4f5a-b8d5-ec938e7ac27b-kube-api-access-p8g69\") pod \"keystone-operator-controller-manager-55ffd4876b-n66z2\" (UID: \"f2e64742-9a09-4f5a-b8d5-ec938e7ac27b\") " pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.666815 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t8bln\" (UniqueName: \"kubernetes.io/projected/f3ca9720-d51d-4c81-9aa0-3c21947be164-kube-api-access-t8bln\") pod \"ironic-operator-controller-manager-545456dc4-wp777\" (UID: \"f3ca9720-d51d-4c81-9aa0-3c21947be164\") " 
pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wp777" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.671955 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.694805 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.696153 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.699009 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-s8xbr" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.699435 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.705537 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.706410 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.708208 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-nzxh2" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.734236 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.744476 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.745443 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.749191 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-5sg7r" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.751569 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.754108 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljqk8\" (UniqueName: \"kubernetes.io/projected/bf787ac7-afe7-4705-a740-80d2f0d60054-kube-api-access-ljqk8\") pod \"octavia-operator-controller-manager-5d86c7ddb7-dqxzg\" (UID: \"bf787ac7-afe7-4705-a740-80d2f0d60054\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.754179 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hdrm\" (UniqueName: \"kubernetes.io/projected/156803c8-e795-452c-9244-b93c2b3af9e7-kube-api-access-8hdrm\") pod \"nova-operator-controller-manager-74b6b5dc96-vcjxj\" (UID: \"156803c8-e795-452c-9244-b93c2b3af9e7\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.754210 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkvxr\" (UniqueName: \"kubernetes.io/projected/1ade7297-180b-4c42-85b7-5edaf33dd0b4-kube-api-access-bkvxr\") pod \"manila-operator-controller-manager-67d996989d-kj8hq\" (UID: \"1ade7297-180b-4c42-85b7-5edaf33dd0b4\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-kj8hq" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.754349 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xhddh\" (UniqueName: \"kubernetes.io/projected/df5608da-0dbc-4335-b221-feb484afd410-kube-api-access-xhddh\") pod \"mariadb-operator-controller-manager-556b8b874-mcvwl\" (UID: \"df5608da-0dbc-4335-b221-feb484afd410\") " pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-mcvwl" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.754385 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hhs8n\" (UniqueName: \"kubernetes.io/projected/f52bc8c9-30b0-4f44-8f5c-f2af4c7176d5-kube-api-access-hhs8n\") pod \"neutron-operator-controller-manager-54688575f-d29bm\" (UID: \"f52bc8c9-30b0-4f44-8f5c-f2af4c7176d5\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-d29bm" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.761890 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.762917 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.765117 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-9dd7m" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.768202 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.777382 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkvxr\" (UniqueName: \"kubernetes.io/projected/1ade7297-180b-4c42-85b7-5edaf33dd0b4-kube-api-access-bkvxr\") pod \"manila-operator-controller-manager-67d996989d-kj8hq\" (UID: \"1ade7297-180b-4c42-85b7-5edaf33dd0b4\") " pod="openstack-operators/manila-operator-controller-manager-67d996989d-kj8hq" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.777382 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhs8n\" (UniqueName: \"kubernetes.io/projected/f52bc8c9-30b0-4f44-8f5c-f2af4c7176d5-kube-api-access-hhs8n\") pod \"neutron-operator-controller-manager-54688575f-d29bm\" (UID: \"f52bc8c9-30b0-4f44-8f5c-f2af4c7176d5\") " pod="openstack-operators/neutron-operator-controller-manager-54688575f-d29bm" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.782289 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xhddh\" (UniqueName: \"kubernetes.io/projected/df5608da-0dbc-4335-b221-feb484afd410-kube-api-access-xhddh\") pod \"mariadb-operator-controller-manager-556b8b874-mcvwl\" (UID: \"df5608da-0dbc-4335-b221-feb484afd410\") " pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-mcvwl" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.782376 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.785756 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.837069 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.838201 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.841115 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dq48v" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.846203 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wp777" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.850495 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.855248 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2\" (UID: \"c0bf6b0d-d70d-4498-a61f-cd7354439357\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.855380 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjkxm\" (UniqueName: \"kubernetes.io/projected/03b225c1-aa9b-4f83-b786-1c9c299ef456-kube-api-access-kjkxm\") pod \"placement-operator-controller-manager-648564c9fc-vq95w\" (UID: \"03b225c1-aa9b-4f83-b786-1c9c299ef456\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.855459 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ljqk8\" (UniqueName: \"kubernetes.io/projected/bf787ac7-afe7-4705-a740-80d2f0d60054-kube-api-access-ljqk8\") pod \"octavia-operator-controller-manager-5d86c7ddb7-dqxzg\" (UID: \"bf787ac7-afe7-4705-a740-80d2f0d60054\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.855528 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hdrm\" (UniqueName: \"kubernetes.io/projected/156803c8-e795-452c-9244-b93c2b3af9e7-kube-api-access-8hdrm\") pod \"nova-operator-controller-manager-74b6b5dc96-vcjxj\" (UID: \"156803c8-e795-452c-9244-b93c2b3af9e7\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.855652 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9b6zt\" (UniqueName: \"kubernetes.io/projected/c0bf6b0d-d70d-4498-a61f-cd7354439357-kube-api-access-9b6zt\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2\" (UID: \"c0bf6b0d-d70d-4498-a61f-cd7354439357\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.855731 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vn7wn\" (UniqueName: \"kubernetes.io/projected/ae129c1e-ae9f-4cef-93fd-b186bf0eb275-kube-api-access-vn7wn\") pod \"swift-operator-controller-manager-9b9ff9f4d-8jdst\" (UID: \"ae129c1e-ae9f-4cef-93fd-b186bf0eb275\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.855830 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z56xw\" (UniqueName: \"kubernetes.io/projected/5cb187f0-85c4-48ef-90fb-6a6c896188e5-kube-api-access-z56xw\") pod \"ovn-operator-controller-manager-75684d597f-rj2g4\" (UID: \"5cb187f0-85c4-48ef-90fb-6a6c896188e5\") " 
pod="openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.868362 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-67d996989d-kj8hq" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.889945 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljqk8\" (UniqueName: \"kubernetes.io/projected/bf787ac7-afe7-4705-a740-80d2f0d60054-kube-api-access-ljqk8\") pod \"octavia-operator-controller-manager-5d86c7ddb7-dqxzg\" (UID: \"bf787ac7-afe7-4705-a740-80d2f0d60054\") " pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.916687 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hdrm\" (UniqueName: \"kubernetes.io/projected/156803c8-e795-452c-9244-b93c2b3af9e7-kube-api-access-8hdrm\") pod \"nova-operator-controller-manager-74b6b5dc96-vcjxj\" (UID: \"156803c8-e795-452c-9244-b93c2b3af9e7\") " pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.942970 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-mcvwl" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.946926 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.947933 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.949841 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-fr7ls" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.953453 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-54688575f-d29bm" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.955476 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.957327 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9b6zt\" (UniqueName: \"kubernetes.io/projected/c0bf6b0d-d70d-4498-a61f-cd7354439357-kube-api-access-9b6zt\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2\" (UID: \"c0bf6b0d-d70d-4498-a61f-cd7354439357\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.957357 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vn7wn\" (UniqueName: \"kubernetes.io/projected/ae129c1e-ae9f-4cef-93fd-b186bf0eb275-kube-api-access-vn7wn\") pod \"swift-operator-controller-manager-9b9ff9f4d-8jdst\" (UID: \"ae129c1e-ae9f-4cef-93fd-b186bf0eb275\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.957386 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z56xw\" (UniqueName: \"kubernetes.io/projected/5cb187f0-85c4-48ef-90fb-6a6c896188e5-kube-api-access-z56xw\") pod \"ovn-operator-controller-manager-75684d597f-rj2g4\" (UID: \"5cb187f0-85c4-48ef-90fb-6a6c896188e5\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.957414 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rst9k\" (UniqueName: \"kubernetes.io/projected/037ffc6c-63a3-4848-9b83-e68944940401-kube-api-access-rst9k\") pod \"telemetry-operator-controller-manager-5c646dc97-69twh\" (UID: \"037ffc6c-63a3-4848-9b83-e68944940401\") " pod="openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.957469 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2\" (UID: \"c0bf6b0d-d70d-4498-a61f-cd7354439357\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.957493 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjkxm\" (UniqueName: \"kubernetes.io/projected/03b225c1-aa9b-4f83-b786-1c9c299ef456-kube-api-access-kjkxm\") pod \"placement-operator-controller-manager-648564c9fc-vq95w\" (UID: \"03b225c1-aa9b-4f83-b786-1c9c299ef456\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w" Feb 27 17:12:45 crc kubenswrapper[4708]: E0227 17:12:45.958123 4708 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 17:12:45 crc kubenswrapper[4708]: E0227 17:12:45.958163 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert 
podName:c0bf6b0d-d70d-4498-a61f-cd7354439357 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:46.458149762 +0000 UTC m=+1164.973947349 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" (UID: "c0bf6b0d-d70d-4498-a61f-cd7354439357") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.965925 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.970458 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.981924 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vn7wn\" (UniqueName: \"kubernetes.io/projected/ae129c1e-ae9f-4cef-93fd-b186bf0eb275-kube-api-access-vn7wn\") pod \"swift-operator-controller-manager-9b9ff9f4d-8jdst\" (UID: \"ae129c1e-ae9f-4cef-93fd-b186bf0eb275\") " pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.984331 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjkxm\" (UniqueName: \"kubernetes.io/projected/03b225c1-aa9b-4f83-b786-1c9c299ef456-kube-api-access-kjkxm\") pod \"placement-operator-controller-manager-648564c9fc-vq95w\" (UID: \"03b225c1-aa9b-4f83-b786-1c9c299ef456\") " pod="openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.985023 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9b6zt\" (UniqueName: \"kubernetes.io/projected/c0bf6b0d-d70d-4498-a61f-cd7354439357-kube-api-access-9b6zt\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2\" (UID: \"c0bf6b0d-d70d-4498-a61f-cd7354439357\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.985542 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z56xw\" (UniqueName: \"kubernetes.io/projected/5cb187f0-85c4-48ef-90fb-6a6c896188e5-kube-api-access-z56xw\") pod \"ovn-operator-controller-manager-75684d597f-rj2g4\" (UID: \"5cb187f0-85c4-48ef-90fb-6a6c896188e5\") " pod="openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.993898 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4"] Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.994958 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4" Feb 27 17:12:45 crc kubenswrapper[4708]: I0227 17:12:45.997551 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-zm567" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.016344 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.026455 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.029278 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.031253 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-qbll6" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.031491 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.031678 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.034414 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.046148 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wzdmr"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.047201 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wzdmr" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.048631 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-ppvmw" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.049234 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wzdmr"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.052211 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.059458 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-sxmk5\" (UID: \"dde28522-3138-4c50-b3c5-1e26d61b96e1\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.059531 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgft9\" (UniqueName: \"kubernetes.io/projected/9cf6d78e-38dd-4875-8fcc-6b34b93c9924-kube-api-access-vgft9\") pod \"test-operator-controller-manager-55b5ff4dbb-jfh6m\" (UID: \"9cf6d78e-38dd-4875-8fcc-6b34b93c9924\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.059595 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rst9k\" (UniqueName: \"kubernetes.io/projected/037ffc6c-63a3-4848-9b83-e68944940401-kube-api-access-rst9k\") pod \"telemetry-operator-controller-manager-5c646dc97-69twh\" (UID: \"037ffc6c-63a3-4848-9b83-e68944940401\") " pod="openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh" Feb 27 17:12:46 crc kubenswrapper[4708]: E0227 17:12:46.059861 4708 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 17:12:46 crc kubenswrapper[4708]: E0227 17:12:46.059906 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert podName:dde28522-3138-4c50-b3c5-1e26d61b96e1 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:47.05989186 +0000 UTC m=+1165.575689447 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert") pod "infra-operator-controller-manager-f7fcc58b9-sxmk5" (UID: "dde28522-3138-4c50-b3c5-1e26d61b96e1") : secret "infra-operator-webhook-server-cert" not found Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.073502 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.085198 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.089334 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rst9k\" (UniqueName: \"kubernetes.io/projected/037ffc6c-63a3-4848-9b83-e68944940401-kube-api-access-rst9k\") pod \"telemetry-operator-controller-manager-5c646dc97-69twh\" (UID: \"037ffc6c-63a3-4848-9b83-e68944940401\") " pod="openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.160667 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmrv8\" (UniqueName: \"kubernetes.io/projected/025b2ef1-3f2f-413f-a6a0-c5d34cd27447-kube-api-access-nmrv8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wzdmr\" (UID: \"025b2ef1-3f2f-413f-a6a0-c5d34cd27447\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wzdmr" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.160714 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.160789 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b97tw\" (UniqueName: \"kubernetes.io/projected/7a28ceb0-14d8-4fa0-a7ca-3921efcaba86-kube-api-access-b97tw\") pod \"watcher-operator-controller-manager-bccc79885-sjbv4\" (UID: \"7a28ceb0-14d8-4fa0-a7ca-3921efcaba86\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.160871 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.160899 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgft9\" (UniqueName: \"kubernetes.io/projected/9cf6d78e-38dd-4875-8fcc-6b34b93c9924-kube-api-access-vgft9\") pod \"test-operator-controller-manager-55b5ff4dbb-jfh6m\" (UID: \"9cf6d78e-38dd-4875-8fcc-6b34b93c9924\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.160955 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dr6bp\" (UniqueName: \"kubernetes.io/projected/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-kube-api-access-dr6bp\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.162078 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.184429 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgft9\" (UniqueName: \"kubernetes.io/projected/9cf6d78e-38dd-4875-8fcc-6b34b93c9924-kube-api-access-vgft9\") pod \"test-operator-controller-manager-55b5ff4dbb-jfh6m\" (UID: \"9cf6d78e-38dd-4875-8fcc-6b34b93c9924\") " pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.262779 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dr6bp\" (UniqueName: \"kubernetes.io/projected/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-kube-api-access-dr6bp\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.263251 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmrv8\" (UniqueName: \"kubernetes.io/projected/025b2ef1-3f2f-413f-a6a0-c5d34cd27447-kube-api-access-nmrv8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wzdmr\" (UID: \"025b2ef1-3f2f-413f-a6a0-c5d34cd27447\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wzdmr" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.263273 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.263326 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b97tw\" (UniqueName: \"kubernetes.io/projected/7a28ceb0-14d8-4fa0-a7ca-3921efcaba86-kube-api-access-b97tw\") pod \"watcher-operator-controller-manager-bccc79885-sjbv4\" (UID: \"7a28ceb0-14d8-4fa0-a7ca-3921efcaba86\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4" Feb 27 17:12:46 crc kubenswrapper[4708]: E0227 17:12:46.263351 4708 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.263391 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" Feb 27 17:12:46 crc kubenswrapper[4708]: E0227 17:12:46.263485 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs podName:8e7ab31e-da8a-4ae8-a4c1-940312416cc3 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:46.763466659 +0000 UTC m=+1165.279264246 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs") pod "openstack-operator-controller-manager-b89df8bf4-c7qtl" (UID: "8e7ab31e-da8a-4ae8-a4c1-940312416cc3") : secret "webhook-server-cert" not found Feb 27 17:12:46 crc kubenswrapper[4708]: E0227 17:12:46.263904 4708 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 17:12:46 crc kubenswrapper[4708]: E0227 17:12:46.263966 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs podName:8e7ab31e-da8a-4ae8-a4c1-940312416cc3 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:46.763956713 +0000 UTC m=+1165.279754290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs") pod "openstack-operator-controller-manager-b89df8bf4-c7qtl" (UID: "8e7ab31e-da8a-4ae8-a4c1-940312416cc3") : secret "metrics-server-cert" not found Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.328454 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dr6bp\" (UniqueName: \"kubernetes.io/projected/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-kube-api-access-dr6bp\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.332411 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b97tw\" (UniqueName: \"kubernetes.io/projected/7a28ceb0-14d8-4fa0-a7ca-3921efcaba86-kube-api-access-b97tw\") pod \"watcher-operator-controller-manager-bccc79885-sjbv4\" (UID: \"7a28ceb0-14d8-4fa0-a7ca-3921efcaba86\") " pod="openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.332629 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmrv8\" (UniqueName: \"kubernetes.io/projected/025b2ef1-3f2f-413f-a6a0-c5d34cd27447-kube-api-access-nmrv8\") pod \"rabbitmq-cluster-operator-manager-668c99d594-wzdmr\" (UID: \"025b2ef1-3f2f-413f-a6a0-c5d34cd27447\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wzdmr" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.349540 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.363440 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.368786 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.400178 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wzdmr" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.449546 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wv64j"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.465799 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2\" (UID: \"c0bf6b0d-d70d-4498-a61f-cd7354439357\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" Feb 27 17:12:46 crc kubenswrapper[4708]: E0227 17:12:46.466032 4708 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 17:12:46 crc kubenswrapper[4708]: E0227 17:12:46.466084 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert podName:c0bf6b0d-d70d-4498-a61f-cd7354439357 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:47.466069641 +0000 UTC m=+1165.981867228 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" (UID: "c0bf6b0d-d70d-4498-a61f-cd7354439357") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.469877 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6db6876945-2hw5z"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.484391 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-cf99c678f-c4pj6"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.497151 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-64db6967f8-5nrb9"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.691113 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.707202 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb"] Feb 27 17:12:46 crc kubenswrapper[4708]: W0227 17:12:46.722515 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45efdeea_5e44_44b0_b9d0_e2cc8c441168.slice/crio-87d4d03288e5d648fcbfe5e8dd23a43d3e268aef879a823bba5190dc68076d5a WatchSource:0}: Error finding container 87d4d03288e5d648fcbfe5e8dd23a43d3e268aef879a823bba5190dc68076d5a: Status 404 returned error can't find the container with id 87d4d03288e5d648fcbfe5e8dd23a43d3e268aef879a823bba5190dc68076d5a Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.770533 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") 
" pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.770655 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" Feb 27 17:12:46 crc kubenswrapper[4708]: E0227 17:12:46.770806 4708 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 17:12:46 crc kubenswrapper[4708]: E0227 17:12:46.770868 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs podName:8e7ab31e-da8a-4ae8-a4c1-940312416cc3 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:47.770841633 +0000 UTC m=+1166.286639220 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs") pod "openstack-operator-controller-manager-b89df8bf4-c7qtl" (UID: "8e7ab31e-da8a-4ae8-a4c1-940312416cc3") : secret "webhook-server-cert" not found Feb 27 17:12:46 crc kubenswrapper[4708]: E0227 17:12:46.771189 4708 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 17:12:46 crc kubenswrapper[4708]: E0227 17:12:46.771229 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs podName:8e7ab31e-da8a-4ae8-a4c1-940312416cc3 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:47.771209974 +0000 UTC m=+1166.287007561 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs") pod "openstack-operator-controller-manager-b89df8bf4-c7qtl" (UID: "8e7ab31e-da8a-4ae8-a4c1-940312416cc3") : secret "metrics-server-cert" not found Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.820324 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-54688575f-d29bm"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.859720 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-545456dc4-wp777"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.906788 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.969697 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-556b8b874-mcvwl"] Feb 27 17:12:46 crc kubenswrapper[4708]: I0227 17:12:46.997931 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-mcvwl" event={"ID":"df5608da-0dbc-4335-b221-feb484afd410","Type":"ContainerStarted","Data":"3e836ca8c9fb7dc88a8f49473c29db7ba1afd051bab3a5c9d12d8a8fd81c929b"} Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.002592 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-c4pj6" event={"ID":"b2819715-8c70-4b6f-8199-8e122f5b03e4","Type":"ContainerStarted","Data":"8860e666e07e64dc7b1003219a36bc2524797e06a945752882118faa33305dc2"} Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.016304 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wp777" event={"ID":"f3ca9720-d51d-4c81-9aa0-3c21947be164","Type":"ContainerStarted","Data":"c80423e6912277ba52d5f0d82d99833b9c0cceed1cdeed666efae6e40d7398e1"} Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.021117 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst"] Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.022202 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2" event={"ID":"f2e64742-9a09-4f5a-b8d5-ec938e7ac27b","Type":"ContainerStarted","Data":"7d88371caaf2dc5a0af35b3ec67bc6ed4a0c859c95c0284dea9bf8d57d397f92"} Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.023250 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2hw5z" event={"ID":"3fd10334-e172-4f8f-8f20-9d447937468f","Type":"ContainerStarted","Data":"662a68f4fd941fddd0322a97dbeccd43212bc0fca9efb8f5812989aab14e4a1e"} Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.024107 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb" event={"ID":"45efdeea-5e44-44b0-b9d0-e2cc8c441168","Type":"ContainerStarted","Data":"87d4d03288e5d648fcbfe5e8dd23a43d3e268aef879a823bba5190dc68076d5a"} Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.026549 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj" event={"ID":"156803c8-e795-452c-9244-b93c2b3af9e7","Type":"ContainerStarted","Data":"c9c5246fad16cef1610514514bef60a06aa774b0b48568ead47e515be95a74cd"} Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.027718 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh" event={"ID":"038010da-affb-4db1-88e9-67e8ee1304cc","Type":"ContainerStarted","Data":"65e9091ffd561ab0a0a4872dc59da763870a6505d770a4c6fbc9167bc17b75d5"} Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.031458 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5nrb9" event={"ID":"5ea0106c-7f8b-493f-847f-da8b5ee33395","Type":"ContainerStarted","Data":"1b81294a640d5eb598d85f0bc37f1557a4c869d9b7309dc03f958c6d778c2d0e"} Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.032385 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wv64j" event={"ID":"9ff0a3b0-a6e8-4f03-bbca-b04e516cfaff","Type":"ContainerStarted","Data":"c335a4f2905c55d35de7d4b33015bfdb43998e0e30b1afe8719e5c06e4639a8f"} Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.033243 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-d29bm" event={"ID":"f52bc8c9-30b0-4f44-8f5c-f2af4c7176d5","Type":"ContainerStarted","Data":"dd4de3ef274271024214a32808328c3f3682dd493abd372c8f8f16d5369fd8bd"} Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.034220 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w"] Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.045467 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-67d996989d-kj8hq"] Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.059207 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vgft9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-55b5ff4dbb-jfh6m_openstack-operators(9cf6d78e-38dd-4875-8fcc-6b34b93c9924): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.062712 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m"] Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.062778 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m" podUID="9cf6d78e-38dd-4875-8fcc-6b34b93c9924" Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.075137 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-sxmk5\" (UID: \"dde28522-3138-4c50-b3c5-1e26d61b96e1\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5" Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.075486 4708 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.075572 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert podName:dde28522-3138-4c50-b3c5-1e26d61b96e1 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:49.075552844 +0000 UTC m=+1167.591350431 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert") pod "infra-operator-controller-manager-f7fcc58b9-sxmk5" (UID: "dde28522-3138-4c50-b3c5-1e26d61b96e1") : secret "infra-operator-webhook-server-cert" not found Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.080705 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-kjkxm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod placement-operator-controller-manager-648564c9fc-vq95w_openstack-operators(03b225c1-aa9b-4f83-b786-1c9c299ef456): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.082080 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w" podUID="03b225c1-aa9b-4f83-b786-1c9c299ef456" Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.094260 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh"] Feb 27 17:12:47 crc kubenswrapper[4708]: W0227 17:12:47.098904 4708 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod037ffc6c_63a3_4848_9b83_e68944940401.slice/crio-703ca5a3b1ed9e5d1bf201b4702778b09fa58566e2dc4fd53d1f67b8bf5baeff WatchSource:0}: Error finding container 703ca5a3b1ed9e5d1bf201b4702778b09fa58566e2dc4fd53d1f67b8bf5baeff: Status 404 returned error can't find the container with id 703ca5a3b1ed9e5d1bf201b4702778b09fa58566e2dc4fd53d1f67b8bf5baeff Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.101736 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.129.56.173:5001/openstack-k8s-operators/telemetry-operator:39a4be8a175d9e84fa6ba159f906a95524540b13,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rst9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-5c646dc97-69twh_openstack-operators(037ffc6c-63a3-4848-9b83-e68944940401): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.104072 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh" podUID="037ffc6c-63a3-4848-9b83-e68944940401" Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.214330 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4"] Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.233432 4708 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg"] Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.239950 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4"] Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.244188 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wzdmr"] Feb 27 17:12:47 crc kubenswrapper[4708]: W0227 17:12:47.248368 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf787ac7_afe7_4705_a740_80d2f0d60054.slice/crio-53089672792b4298677a216eafcb40eeacf76265aac6f76fd1fe44e1f41c48cd WatchSource:0}: Error finding container 53089672792b4298677a216eafcb40eeacf76265aac6f76fd1fe44e1f41c48cd: Status 404 returned error can't find the container with id 53089672792b4298677a216eafcb40eeacf76265aac6f76fd1fe44e1f41c48cd Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.253020 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ljqk8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod octavia-operator-controller-manager-5d86c7ddb7-dqxzg_openstack-operators(bf787ac7-afe7-4705-a740-80d2f0d60054): 
ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.253227 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-b97tw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-bccc79885-sjbv4_openstack-operators(7a28ceb0-14d8-4fa0-a7ca-3921efcaba86): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.254621 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4" podUID="7a28ceb0-14d8-4fa0-a7ca-3921efcaba86" Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.254691 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg" podUID="bf787ac7-afe7-4705-a740-80d2f0d60054" Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.254718 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nmrv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-668c99d594-wzdmr_openstack-operators(025b2ef1-3f2f-413f-a6a0-c5d34cd27447): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.256562 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wzdmr" podUID="025b2ef1-3f2f-413f-a6a0-c5d34cd27447" Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.485549 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2\" (UID: \"c0bf6b0d-d70d-4498-a61f-cd7354439357\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.485772 4708 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.486096 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert podName:c0bf6b0d-d70d-4498-a61f-cd7354439357 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:49.486079618 +0000 UTC m=+1168.001877205 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" (UID: "c0bf6b0d-d70d-4498-a61f-cd7354439357") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.794464 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" Feb 27 17:12:47 crc kubenswrapper[4708]: I0227 17:12:47.794611 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.794740 4708 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.794887 4708 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.794910 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs podName:8e7ab31e-da8a-4ae8-a4c1-940312416cc3 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:49.794830101 +0000 UTC m=+1168.310627728 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs") pod "openstack-operator-controller-manager-b89df8bf4-c7qtl" (UID: "8e7ab31e-da8a-4ae8-a4c1-940312416cc3") : secret "webhook-server-cert" not found Feb 27 17:12:47 crc kubenswrapper[4708]: E0227 17:12:47.795021 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs podName:8e7ab31e-da8a-4ae8-a4c1-940312416cc3 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:49.794981845 +0000 UTC m=+1168.310779512 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs") pod "openstack-operator-controller-manager-b89df8bf4-c7qtl" (UID: "8e7ab31e-da8a-4ae8-a4c1-940312416cc3") : secret "metrics-server-cert" not found Feb 27 17:12:48 crc kubenswrapper[4708]: I0227 17:12:48.041680 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-kj8hq" event={"ID":"1ade7297-180b-4c42-85b7-5edaf33dd0b4","Type":"ContainerStarted","Data":"3b8fab4d0f8ae32699586b79d534c32a866d737b6cb02e6c84529b3f0f28edd0"} Feb 27 17:12:48 crc kubenswrapper[4708]: I0227 17:12:48.042900 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m" event={"ID":"9cf6d78e-38dd-4875-8fcc-6b34b93c9924","Type":"ContainerStarted","Data":"2e6b8d9afa754ca8625f6178956a297358da33989401f68bc7cb0ffd9b6220bd"} Feb 27 17:12:48 crc kubenswrapper[4708]: E0227 17:12:48.044473 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m" podUID="9cf6d78e-38dd-4875-8fcc-6b34b93c9924" Feb 27 17:12:48 crc kubenswrapper[4708]: I0227 17:12:48.044705 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg" event={"ID":"bf787ac7-afe7-4705-a740-80d2f0d60054","Type":"ContainerStarted","Data":"53089672792b4298677a216eafcb40eeacf76265aac6f76fd1fe44e1f41c48cd"} Feb 27 17:12:48 crc kubenswrapper[4708]: E0227 17:12:48.047029 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg" podUID="bf787ac7-afe7-4705-a740-80d2f0d60054" Feb 27 17:12:48 crc kubenswrapper[4708]: I0227 17:12:48.050322 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh" event={"ID":"037ffc6c-63a3-4848-9b83-e68944940401","Type":"ContainerStarted","Data":"703ca5a3b1ed9e5d1bf201b4702778b09fa58566e2dc4fd53d1f67b8bf5baeff"} Feb 27 17:12:48 crc kubenswrapper[4708]: E0227 17:12:48.051389 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.173:5001/openstack-k8s-operators/telemetry-operator:39a4be8a175d9e84fa6ba159f906a95524540b13\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh" podUID="037ffc6c-63a3-4848-9b83-e68944940401" Feb 27 17:12:48 crc kubenswrapper[4708]: I0227 17:12:48.052724 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w" event={"ID":"03b225c1-aa9b-4f83-b786-1c9c299ef456","Type":"ContainerStarted","Data":"b7861fc1e90749a0aa0db8c23dbad722269e2397707e03dc3a6e26d5c596b29b"} Feb 27 17:12:48 crc kubenswrapper[4708]: E0227 17:12:48.058506 4708 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e\\\"\"" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w" podUID="03b225c1-aa9b-4f83-b786-1c9c299ef456" Feb 27 17:12:48 crc kubenswrapper[4708]: I0227 17:12:48.058735 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4" event={"ID":"7a28ceb0-14d8-4fa0-a7ca-3921efcaba86","Type":"ContainerStarted","Data":"72660ef4a0b492134cb256393ebebab073c7a8b28d40ae504430fcb0e6108e12"} Feb 27 17:12:48 crc kubenswrapper[4708]: I0227 17:12:48.064693 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst" event={"ID":"ae129c1e-ae9f-4cef-93fd-b186bf0eb275","Type":"ContainerStarted","Data":"b655037ee58d699b20a16dc3e91a58df7833b031b4c371cca134fc9d96b08dd4"} Feb 27 17:12:48 crc kubenswrapper[4708]: I0227 17:12:48.066224 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4" event={"ID":"5cb187f0-85c4-48ef-90fb-6a6c896188e5","Type":"ContainerStarted","Data":"592bfac15fb3706edbabd001d75c1df205baa3a61b9e03a0e06ced889b4b2163"} Feb 27 17:12:48 crc kubenswrapper[4708]: E0227 17:12:48.068696 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4" podUID="7a28ceb0-14d8-4fa0-a7ca-3921efcaba86" Feb 27 17:12:48 crc kubenswrapper[4708]: I0227 17:12:48.069841 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wzdmr" event={"ID":"025b2ef1-3f2f-413f-a6a0-c5d34cd27447","Type":"ContainerStarted","Data":"e1a5007c86ff139fd4d219d2c8c6392a2e350811ce09aa61c9005d1943c08a2a"} Feb 27 17:12:48 crc kubenswrapper[4708]: E0227 17:12:48.071249 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wzdmr" podUID="025b2ef1-3f2f-413f-a6a0-c5d34cd27447" Feb 27 17:12:49 crc kubenswrapper[4708]: E0227 17:12:49.089076 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.129.56.173:5001/openstack-k8s-operators/telemetry-operator:39a4be8a175d9e84fa6ba159f906a95524540b13\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh" podUID="037ffc6c-63a3-4848-9b83-e68944940401" Feb 27 17:12:49 crc kubenswrapper[4708]: E0227 17:12:49.089746 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/placement-operator@sha256:bb939885bd04593ad03af901adb77ee2a2d18529b328c23288c7cc7a2ba5282e\\\"\"" 
pod="openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w" podUID="03b225c1-aa9b-4f83-b786-1c9c299ef456" Feb 27 17:12:49 crc kubenswrapper[4708]: E0227 17:12:49.089878 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/octavia-operator@sha256:2d59045b8d8e6f9c5483c4fdda7c5057218d553200dc4bcf26789980ac1d9abd\\\"\"" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg" podUID="bf787ac7-afe7-4705-a740-80d2f0d60054" Feb 27 17:12:49 crc kubenswrapper[4708]: E0227 17:12:49.089934 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:9d03f03aa9a460f1fcac8875064808c03e4ecd0388873bbfb9c7dc58331f3968\\\"\"" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m" podUID="9cf6d78e-38dd-4875-8fcc-6b34b93c9924" Feb 27 17:12:49 crc kubenswrapper[4708]: E0227 17:12:49.089984 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wzdmr" podUID="025b2ef1-3f2f-413f-a6a0-c5d34cd27447" Feb 27 17:12:49 crc kubenswrapper[4708]: E0227 17:12:49.090031 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:06311600a491c689493552e7ff26e36df740fa4e7c143fca874bef19f24afb97\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4" podUID="7a28ceb0-14d8-4fa0-a7ca-3921efcaba86" Feb 27 17:12:49 crc kubenswrapper[4708]: I0227 17:12:49.131427 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-sxmk5\" (UID: \"dde28522-3138-4c50-b3c5-1e26d61b96e1\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5" Feb 27 17:12:49 crc kubenswrapper[4708]: E0227 17:12:49.131635 4708 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 17:12:49 crc kubenswrapper[4708]: E0227 17:12:49.131677 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert podName:dde28522-3138-4c50-b3c5-1e26d61b96e1 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:53.131663404 +0000 UTC m=+1171.647460991 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert") pod "infra-operator-controller-manager-f7fcc58b9-sxmk5" (UID: "dde28522-3138-4c50-b3c5-1e26d61b96e1") : secret "infra-operator-webhook-server-cert" not found Feb 27 17:12:49 crc kubenswrapper[4708]: I0227 17:12:49.537405 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2\" (UID: \"c0bf6b0d-d70d-4498-a61f-cd7354439357\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" Feb 27 17:12:49 crc kubenswrapper[4708]: E0227 17:12:49.537578 4708 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 17:12:49 crc kubenswrapper[4708]: E0227 17:12:49.537649 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert podName:c0bf6b0d-d70d-4498-a61f-cd7354439357 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:53.537631702 +0000 UTC m=+1172.053429289 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" (UID: "c0bf6b0d-d70d-4498-a61f-cd7354439357") : secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 17:12:49 crc kubenswrapper[4708]: I0227 17:12:49.841339 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" Feb 27 17:12:49 crc kubenswrapper[4708]: I0227 17:12:49.841417 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" Feb 27 17:12:49 crc kubenswrapper[4708]: E0227 17:12:49.841544 4708 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Feb 27 17:12:49 crc kubenswrapper[4708]: E0227 17:12:49.841608 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs podName:8e7ab31e-da8a-4ae8-a4c1-940312416cc3 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:53.841592539 +0000 UTC m=+1172.357390126 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs") pod "openstack-operator-controller-manager-b89df8bf4-c7qtl" (UID: "8e7ab31e-da8a-4ae8-a4c1-940312416cc3") : secret "webhook-server-cert" not found Feb 27 17:12:49 crc kubenswrapper[4708]: E0227 17:12:49.841550 4708 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Feb 27 17:12:49 crc kubenswrapper[4708]: E0227 17:12:49.842053 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs podName:8e7ab31e-da8a-4ae8-a4c1-940312416cc3 nodeName:}" failed. No retries permitted until 2026-02-27 17:12:53.842033462 +0000 UTC m=+1172.357831049 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs") pod "openstack-operator-controller-manager-b89df8bf4-c7qtl" (UID: "8e7ab31e-da8a-4ae8-a4c1-940312416cc3") : secret "metrics-server-cert" not found Feb 27 17:12:53 crc kubenswrapper[4708]: I0227 17:12:53.205038 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-sxmk5\" (UID: \"dde28522-3138-4c50-b3c5-1e26d61b96e1\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5" Feb 27 17:12:53 crc kubenswrapper[4708]: E0227 17:12:53.205321 4708 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Feb 27 17:12:53 crc kubenswrapper[4708]: E0227 17:12:53.205643 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert podName:dde28522-3138-4c50-b3c5-1e26d61b96e1 nodeName:}" failed. No retries permitted until 2026-02-27 17:13:01.205614862 +0000 UTC m=+1179.721412459 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert") pod "infra-operator-controller-manager-f7fcc58b9-sxmk5" (UID: "dde28522-3138-4c50-b3c5-1e26d61b96e1") : secret "infra-operator-webhook-server-cert" not found Feb 27 17:12:53 crc kubenswrapper[4708]: I0227 17:12:53.611201 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2\" (UID: \"c0bf6b0d-d70d-4498-a61f-cd7354439357\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" Feb 27 17:12:53 crc kubenswrapper[4708]: E0227 17:12:53.611374 4708 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Feb 27 17:12:53 crc kubenswrapper[4708]: E0227 17:12:53.611427 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert podName:c0bf6b0d-d70d-4498-a61f-cd7354439357 nodeName:}" failed. No retries permitted until 2026-02-27 17:13:01.611411736 +0000 UTC m=+1180.127209323 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert") pod "openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" (UID: "c0bf6b0d-d70d-4498-a61f-cd7354439357") : secret "openstack-baremetal-operator-webhook-server-cert" not found
Feb 27 17:12:53 crc kubenswrapper[4708]: I0227 17:12:53.915876 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl"
Feb 27 17:12:53 crc kubenswrapper[4708]: I0227 17:12:53.916009 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl"
Feb 27 17:12:53 crc kubenswrapper[4708]: E0227 17:12:53.916194 4708 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 27 17:12:53 crc kubenswrapper[4708]: E0227 17:12:53.916190 4708 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found
Feb 27 17:12:53 crc kubenswrapper[4708]: E0227 17:12:53.916274 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs podName:8e7ab31e-da8a-4ae8-a4c1-940312416cc3 nodeName:}" failed. No retries permitted until 2026-02-27 17:13:01.916254028 +0000 UTC m=+1180.432051615 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs") pod "openstack-operator-controller-manager-b89df8bf4-c7qtl" (UID: "8e7ab31e-da8a-4ae8-a4c1-940312416cc3") : secret "webhook-server-cert" not found
Feb 27 17:12:53 crc kubenswrapper[4708]: E0227 17:12:53.916322 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs podName:8e7ab31e-da8a-4ae8-a4c1-940312416cc3 nodeName:}" failed. No retries permitted until 2026-02-27 17:13:01.916286759 +0000 UTC m=+1180.432084386 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs") pod "openstack-operator-controller-manager-b89df8bf4-c7qtl" (UID: "8e7ab31e-da8a-4ae8-a4c1-940312416cc3") : secret "metrics-server-cert" not found
Feb 27 17:13:01 crc kubenswrapper[4708]: E0227 17:13:01.222777 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c"
Feb 27 17:13:01 crc kubenswrapper[4708]: E0227 17:13:01.223659 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z56xw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-75684d597f-rj2g4_openstack-operators(5cb187f0-85c4-48ef-90fb-6a6c896188e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 17:13:01 crc kubenswrapper[4708]: E0227 17:13:01.224945 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4" podUID="5cb187f0-85c4-48ef-90fb-6a6c896188e5"
Feb 27 17:13:01 crc kubenswrapper[4708]: I0227 17:13:01.241305 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-sxmk5\" (UID: \"dde28522-3138-4c50-b3c5-1e26d61b96e1\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5"
Feb 27 17:13:01 crc kubenswrapper[4708]: I0227 17:13:01.253360 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/dde28522-3138-4c50-b3c5-1e26d61b96e1-cert\") pod \"infra-operator-controller-manager-f7fcc58b9-sxmk5\" (UID: \"dde28522-3138-4c50-b3c5-1e26d61b96e1\") " pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5"
Feb 27 17:13:01 crc kubenswrapper[4708]: I0227 17:13:01.279300 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5"
Feb 27 17:13:01 crc kubenswrapper[4708]: I0227 17:13:01.646621 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2\" (UID: \"c0bf6b0d-d70d-4498-a61f-cd7354439357\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2"
Feb 27 17:13:01 crc kubenswrapper[4708]: I0227 17:13:01.669967 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/c0bf6b0d-d70d-4498-a61f-cd7354439357-cert\") pod \"openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2\" (UID: \"c0bf6b0d-d70d-4498-a61f-cd7354439357\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2"
Feb 27 17:13:01 crc kubenswrapper[4708]: E0227 17:13:01.891684 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3"
Feb 27 17:13:01 crc kubenswrapper[4708]: E0227 17:13:01.891899 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-lp5k6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-55d77d7b5c-4kwbb_openstack-operators(45efdeea-5e44-44b0-b9d0-e2cc8c441168): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 17:13:01 crc kubenswrapper[4708]: E0227 17:13:01.893106 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb" podUID="45efdeea-5e44-44b0-b9d0-e2cc8c441168"
Feb 27 17:13:01 crc kubenswrapper[4708]: I0227 17:13:01.934839 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2"
Feb 27 17:13:01 crc kubenswrapper[4708]: I0227 17:13:01.970459 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl"
Feb 27 17:13:01 crc kubenswrapper[4708]: I0227 17:13:01.970537 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl"
Feb 27 17:13:01 crc kubenswrapper[4708]: E0227 17:13:01.970635 4708 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found
Feb 27 17:13:01 crc kubenswrapper[4708]: E0227 17:13:01.970709 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs podName:8e7ab31e-da8a-4ae8-a4c1-940312416cc3 nodeName:}" failed. No retries permitted until 2026-02-27 17:13:17.970691724 +0000 UTC m=+1196.486489311 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs") pod "openstack-operator-controller-manager-b89df8bf4-c7qtl" (UID: "8e7ab31e-da8a-4ae8-a4c1-940312416cc3") : secret "webhook-server-cert" not found
Feb 27 17:13:01 crc kubenswrapper[4708]: I0227 17:13:01.975068 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-metrics-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl"
Feb 27 17:13:02 crc kubenswrapper[4708]: E0227 17:13:02.222382 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:9f73c84a9581b5739d8da333c7b64403d7b7ca284b22c624d0effe07f3d2819c\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4" podUID="5cb187f0-85c4-48ef-90fb-6a6c896188e5"
Feb 27 17:13:02 crc kubenswrapper[4708]: E0227 17:13:02.222921 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/cinder-operator@sha256:7961c67cfc87de69055f8330771af625f73d857426c4bb17ebb888ead843fff3\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb" podUID="45efdeea-5e44-44b0-b9d0-e2cc8c441168"
Feb 27 17:13:02 crc kubenswrapper[4708]: E0227 17:13:02.789148 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214"
Feb 27 17:13:02 crc kubenswrapper[4708]: E0227 17:13:02.789345 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-htpl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod designate-operator-controller-manager-5d87c9d997-wffwh_openstack-operators(038010da-affb-4db1-88e9-67e8ee1304cc): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 17:13:02 crc kubenswrapper[4708]: E0227 17:13:02.790660 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh" podUID="038010da-affb-4db1-88e9-67e8ee1304cc"
Feb 27 17:13:03 crc kubenswrapper[4708]: E0227 17:13:03.230217 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/designate-operator@sha256:508859beb0e5b69169393dbb0039dc03a9d4ba05f16f6ff74f9b25e19d446214\\\"\"" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh" podUID="038010da-affb-4db1-88e9-67e8ee1304cc"
Feb 27 17:13:03 crc kubenswrapper[4708]: E0227 17:13:03.699667 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4"
Feb 27 17:13:03 crc kubenswrapper[4708]: E0227 17:13:03.699900 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hhs8n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-54688575f-d29bm_openstack-operators(f52bc8c9-30b0-4f44-8f5c-f2af4c7176d5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 17:13:03 crc kubenswrapper[4708]: E0227 17:13:03.701034 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-d29bm" podUID="f52bc8c9-30b0-4f44-8f5c-f2af4c7176d5"
Feb 27 17:13:04 crc kubenswrapper[4708]: E0227 17:13:04.241528 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:b242403a27609ac87a0ed3a7dd788aceaf8f3da3620981cf5e000d56862d77a4\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-54688575f-d29bm" podUID="f52bc8c9-30b0-4f44-8f5c-f2af4c7176d5"
Feb 27 17:13:04 crc kubenswrapper[4708]: E0227 17:13:04.307735 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7"
Feb 27 17:13:04 crc kubenswrapper[4708]: E0227 17:13:04.307938 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vn7wn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-9b9ff9f4d-8jdst_openstack-operators(ae129c1e-ae9f-4cef-93fd-b186bf0eb275): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 17:13:04 crc kubenswrapper[4708]: E0227 17:13:04.309167 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst" podUID="ae129c1e-ae9f-4cef-93fd-b186bf0eb275"
Feb 27 17:13:05 crc kubenswrapper[4708]: E0227 17:13:05.084255 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84"
Feb 27 17:13:05 crc kubenswrapper[4708]: E0227 17:13:05.084542 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8hdrm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-74b6b5dc96-vcjxj_openstack-operators(156803c8-e795-452c-9244-b93c2b3af9e7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 17:13:05 crc kubenswrapper[4708]: E0227 17:13:05.085657 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj" podUID="156803c8-e795-452c-9244-b93c2b3af9e7"
Feb 27 17:13:05 crc kubenswrapper[4708]: E0227 17:13:05.249946 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:f309cdea8084a4b1e8cbcd732d6e250fd93c55cfd1b48ba9026907c8591faab7\\\"\"" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst" podUID="ae129c1e-ae9f-4cef-93fd-b186bf0eb275"
Feb 27 17:13:05 crc kubenswrapper[4708]: E0227 17:13:05.251770 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:172f24bd4603ac3498536a8a2c8fffb07cf9113dd52bc132778ea0aa275c6b84\\\"\"" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj" podUID="156803c8-e795-452c-9244-b93c2b3af9e7"
Feb 27 17:13:05 crc kubenswrapper[4708]: I0227 17:13:05.631555 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 17:13:05 crc kubenswrapper[4708]: I0227 17:13:05.631840 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 17:13:05 crc kubenswrapper[4708]: I0227 17:13:05.631890 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2"
Feb 27 17:13:05 crc kubenswrapper[4708]: I0227 17:13:05.632735 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"39dbd7797d34062ee99cfd72758adf14eea4f4680611bae0c80a2a4882b14a2d"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Feb 27 17:13:05 crc kubenswrapper[4708]: I0227 17:13:05.632785 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://39dbd7797d34062ee99cfd72758adf14eea4f4680611bae0c80a2a4882b14a2d" gracePeriod=600
Feb 27 17:13:05 crc kubenswrapper[4708]: E0227 17:13:05.696744 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/keystone-operator@sha256:12fa31d2a2dfe1a832c6a2c0eb58876a3a62595a1a1f49b13c2a1f9b6d378735"
Feb 27 17:13:05 crc kubenswrapper[4708]: E0227 17:13:05.696922 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/keystone-operator@sha256:12fa31d2a2dfe1a832c6a2c0eb58876a3a62595a1a1f49b13c2a1f9b6d378735,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-p8g69,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod keystone-operator-controller-manager-55ffd4876b-n66z2_openstack-operators(f2e64742-9a09-4f5a-b8d5-ec938e7ac27b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 17:13:05 crc kubenswrapper[4708]: E0227 17:13:05.698087 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2" podUID="f2e64742-9a09-4f5a-b8d5-ec938e7ac27b"
Feb 27 17:13:06 crc kubenswrapper[4708]: I0227 17:13:06.254384 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="39dbd7797d34062ee99cfd72758adf14eea4f4680611bae0c80a2a4882b14a2d" exitCode=0
Feb 27 17:13:06 crc kubenswrapper[4708]: I0227 17:13:06.254906 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"39dbd7797d34062ee99cfd72758adf14eea4f4680611bae0c80a2a4882b14a2d"}
Feb 27 17:13:06 crc kubenswrapper[4708]: I0227 17:13:06.254936 4708 scope.go:117] "RemoveContainer" containerID="1b93b6ea88dbf15ec38dc361eee21fbc69cdb9df7c63344796e2852a98085a90"
Feb 27 17:13:06 crc kubenswrapper[4708]: E0227 17:13:06.255784 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/keystone-operator@sha256:12fa31d2a2dfe1a832c6a2c0eb58876a3a62595a1a1f49b13c2a1f9b6d378735\\\"\"" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2" podUID="f2e64742-9a09-4f5a-b8d5-ec938e7ac27b"
Feb 27 17:13:07 crc kubenswrapper[4708]: I0227 17:13:07.398933 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2"]
Feb 27 17:13:10 crc kubenswrapper[4708]: I0227 17:13:10.831355 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5"]
Feb 27 17:13:11 crc kubenswrapper[4708]: I0227 17:13:11.307456 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5" event={"ID":"dde28522-3138-4c50-b3c5-1e26d61b96e1","Type":"ContainerStarted","Data":"da8606aeead48fe1222902092d6f1f1f8277d4c43a8eb1d64bb0413796465f98"}
Feb 27 17:13:11 crc kubenswrapper[4708]: I0227 17:13:11.310748 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" event={"ID":"c0bf6b0d-d70d-4498-a61f-cd7354439357","Type":"ContainerStarted","Data":"e531bacba6113395624961af47a2bd4be1d0ab758dd65638747768746cd6d746"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.327031 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2hw5z" event={"ID":"3fd10334-e172-4f8f-8f20-9d447937468f","Type":"ContainerStarted","Data":"f9f61bde86f809f84888a7ad7421a074e41e6d7b7a5f56d33c7e5ff9b1918efc"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.328270 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2hw5z"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.349155 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-mcvwl" event={"ID":"df5608da-0dbc-4335-b221-feb484afd410","Type":"ContainerStarted","Data":"849f8e1768a33b416e8de84f2216fd21f9db59a8e900d924f1ba8b85d1f75788"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.349788 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-mcvwl"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.372194 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5nrb9" event={"ID":"5ea0106c-7f8b-493f-847f-da8b5ee33395","Type":"ContainerStarted","Data":"5fc8ebc2fe3d145aab004a34d38ac2ef3e278963b5b5d0b54c2f9080f84b1d84"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.372628 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5nrb9"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.403478 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh" event={"ID":"037ffc6c-63a3-4848-9b83-e68944940401","Type":"ContainerStarted","Data":"ef8f1297ac0ac5df9fc55301eddbb63138a275911ff0b46539210c3b264e13f5"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.404055 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.408885 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-mcvwl" podStartSLOduration=8.696786445 podStartE2EDuration="27.408876562s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:46.95843159 +0000 UTC m=+1165.474229177" lastFinishedPulling="2026-02-27 17:13:05.670521707 +0000 UTC m=+1184.186319294" observedRunningTime="2026-02-27 17:13:12.405645491 +0000 UTC m=+1190.921443078" watchObservedRunningTime="2026-02-27 17:13:12.408876562 +0000 UTC m=+1190.924674149"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.423665 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wzdmr" event={"ID":"025b2ef1-3f2f-413f-a6a0-c5d34cd27447","Type":"ContainerStarted","Data":"13802186439f633ad4c7c3af8122172192de0dc8d22752e8d7111df9b454f49f"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.440018 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w" event={"ID":"03b225c1-aa9b-4f83-b786-1c9c299ef456","Type":"ContainerStarted","Data":"d89fdf77cc884119ecd89f622554d827519d4b6bc1398c7a44929e48a63ad4fd"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.440718 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.459590 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-67d996989d-kj8hq" event={"ID":"1ade7297-180b-4c42-85b7-5edaf33dd0b4","Type":"ContainerStarted","Data":"fda61cea16600c9ad90a50414dd50f179b0e5ccbe3b0a6209ab48e512a356cd5"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.459627 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-67d996989d-kj8hq"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.492463 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-c4pj6" event={"ID":"b2819715-8c70-4b6f-8199-8e122f5b03e4","Type":"ContainerStarted","Data":"dc3ea5f3d302aea3aca070f262a174b65b4930810ae40af001ddad3e6776bb9d"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.492551 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-c4pj6"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.503757 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2hw5z" podStartSLOduration=8.963847487 podStartE2EDuration="27.503742653s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:46.539026965 +0000 UTC m=+1165.054824552" lastFinishedPulling="2026-02-27 17:13:05.078922131 +0000 UTC m=+1183.594719718" observedRunningTime="2026-02-27 17:13:12.450023006 +0000 UTC m=+1190.965820603" watchObservedRunningTime="2026-02-27 17:13:12.503742653 +0000 UTC m=+1191.019540240"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.519019 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4" event={"ID":"7a28ceb0-14d8-4fa0-a7ca-3921efcaba86","Type":"ContainerStarted","Data":"ee8e05370428ef938afc2f4c15a41e3b008115fa9dcc6e51f83bb85f76aff85b"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.519616 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.538579 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5nrb9" podStartSLOduration=8.961970176 podStartE2EDuration="27.53856482s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:46.502194713 +0000 UTC m=+1165.017992300" lastFinishedPulling="2026-02-27 17:13:05.078789357 +0000 UTC m=+1183.594586944" observedRunningTime="2026-02-27 17:13:12.504157775 +0000 UTC m=+1191.019955362" watchObservedRunningTime="2026-02-27 17:13:12.53856482 +0000 UTC m=+1191.054362407"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.539168 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w" podStartSLOduration=3.441200446 podStartE2EDuration="27.539163817s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:47.080632667 +0000 UTC m=+1165.596430254" lastFinishedPulling="2026-02-27 17:13:11.178596038 +0000 UTC m=+1189.694393625" observedRunningTime="2026-02-27 17:13:12.537677975 +0000 UTC m=+1191.053475562" watchObservedRunningTime="2026-02-27 17:13:12.539163817 +0000 UTC m=+1191.054961404"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.543400 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg" event={"ID":"bf787ac7-afe7-4705-a740-80d2f0d60054","Type":"ContainerStarted","Data":"f24d7558f1ba887e0f017d97ec6283734e324bd24d9f62c2f7389447de02f40f"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.544055 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.577925 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh" podStartSLOduration=3.454546196 podStartE2EDuration="27.577896074s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:47.1015777 +0000 UTC m=+1165.617375287" lastFinishedPulling="2026-02-27 17:13:11.224927538 +0000 UTC m=+1189.740725165" observedRunningTime="2026-02-27 17:13:12.570168417 +0000 UTC m=+1191.085966004" watchObservedRunningTime="2026-02-27 17:13:12.577896074 +0000 UTC m=+1191.093693651"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.597939 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wp777" event={"ID":"f3ca9720-d51d-4c81-9aa0-3c21947be164","Type":"ContainerStarted","Data":"5f21a2d36931ced18483d8e4c6686ab0353ebf62b995bdd1e38c0c7aadf3ee44"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.598875 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wp777"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.634312 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-wzdmr" podStartSLOduration=3.669117372 podStartE2EDuration="27.634295356s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:47.254600599 +0000 UTC m=+1165.770398176" lastFinishedPulling="2026-02-27 17:13:11.219778573 +0000 UTC m=+1189.735576160" observedRunningTime="2026-02-27 17:13:12.629120401 +0000 UTC m=+1191.144917988" watchObservedRunningTime="2026-02-27 17:13:12.634295356 +0000 UTC m=+1191.150092943"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.660207 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"c1a4a3b793414b4b10c54d77ec77375b6657e6d822660a8ebe494db8ea78162c"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.667792 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m" event={"ID":"9cf6d78e-38dd-4875-8fcc-6b34b93c9924","Type":"ContainerStarted","Data":"e2183bb56a160f179aae28d12bf8b02271318a656352dc9857271961e63c850e"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.668504 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.669929 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wv64j" event={"ID":"9ff0a3b0-a6e8-4f03-bbca-b04e516cfaff","Type":"ContainerStarted","Data":"1afb3c7a8da32dec1b78496176a018fd8dfff5e551c7c3f210c7793489094d39"}
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.670237 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wv64j"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.682240 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-c4pj6" podStartSLOduration=9.141692506 podStartE2EDuration="27.68222375s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:46.53848666 +0000 UTC m=+1165.054284247" lastFinishedPulling="2026-02-27 17:13:05.079017904 +0000 UTC m=+1183.594815491" observedRunningTime="2026-02-27 17:13:12.681015107 +0000 UTC m=+1191.196812694" watchObservedRunningTime="2026-02-27 17:13:12.68222375 +0000 UTC m=+1191.198021337"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.718387 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg" podStartSLOduration=3.756069401 podStartE2EDuration="27.718372285s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:47.252901571 +0000 UTC m=+1165.768699158" lastFinishedPulling="2026-02-27 17:13:11.215204425 +0000 UTC m=+1189.731002042" observedRunningTime="2026-02-27 17:13:12.717373027 +0000 UTC m=+1191.233170614" watchObservedRunningTime="2026-02-27 17:13:12.718372285 +0000 UTC m=+1191.234169872"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.774803 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-67d996989d-kj8hq" podStartSLOduration=9.162093912 podStartE2EDuration="27.774788587s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:47.05882308 +0000 UTC m=+1165.574620667" lastFinishedPulling="2026-02-27 17:13:05.671517755 +0000 UTC m=+1184.187315342" observedRunningTime="2026-02-27 17:13:12.769637793 +0000 UTC m=+1191.285435380" watchObservedRunningTime="2026-02-27 17:13:12.774788587 +0000 UTC m=+1191.290586164"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.783709 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4" podStartSLOduration=3.955979479 podStartE2EDuration="27.783694127s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:47.253167119 +0000 UTC m=+1165.768964706" lastFinishedPulling="2026-02-27 17:13:11.080881757 +0000 UTC m=+1189.596679354" observedRunningTime="2026-02-27 17:13:12.743231412 +0000 UTC m=+1191.259028999" watchObservedRunningTime="2026-02-27 17:13:12.783694127 +0000 UTC m=+1191.299491714"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.786023 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wp777" podStartSLOduration=7.773803962 podStartE2EDuration="27.786009682s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:46.875966827 +0000 UTC m=+1165.391764404" lastFinishedPulling="2026-02-27 17:13:06.888172527 +0000 UTC m=+1185.403970124" observedRunningTime="2026-02-27 17:13:12.78522889 +0000 UTC m=+1191.301026477" watchObservedRunningTime="2026-02-27 17:13:12.786009682 +0000 UTC m=+1191.301807259"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.816978 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wv64j" podStartSLOduration=9.274932054 podStartE2EDuration="27.81695391s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:46.538281824 +0000 UTC m=+1165.054079411" lastFinishedPulling="2026-02-27 17:13:05.08030368 +0000 UTC m=+1183.596101267" observedRunningTime="2026-02-27 17:13:12.801217649 +0000 UTC m=+1191.317015236" watchObservedRunningTime="2026-02-27 17:13:12.81695391 +0000 UTC m=+1191.332751497"
Feb 27 17:13:12 crc kubenswrapper[4708]: I0227 17:13:12.839031 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m" podStartSLOduration=4.293543652 podStartE2EDuration="27.839016319s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:47.059088348 +0000 UTC m=+1165.574885935" lastFinishedPulling="2026-02-27 17:13:10.604560975 +0000 UTC m=+1189.120358602" observedRunningTime="2026-02-27 17:13:12.83512058 +0000 UTC m=+1191.350918167" watchObservedRunningTime="2026-02-27 17:13:12.839016319 +0000 UTC m=+1191.354813906"
Feb 27 17:13:16 crc kubenswrapper[4708]: I0227 17:13:16.077968 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-648564c9fc-vq95w"
Feb 27 17:13:16 crc kubenswrapper[4708]: I0227 17:13:16.166548 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-5c646dc97-69twh"
Feb 27 17:13:16 crc kubenswrapper[4708]: I0227 17:13:16.352447 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-55b5ff4dbb-jfh6m"
Feb 27 17:13:16 crc kubenswrapper[4708]: I0227 17:13:16.366291 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-bccc79885-sjbv4"
Feb 27 17:13:17 crc kubenswrapper[4708]: I0227 17:13:17.984222 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl"
Feb 27 17:13:17 crc kubenswrapper[4708]: I0227 17:13:17.996628 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/8e7ab31e-da8a-4ae8-a4c1-940312416cc3-webhook-certs\") pod \"openstack-operator-controller-manager-b89df8bf4-c7qtl\" (UID: \"8e7ab31e-da8a-4ae8-a4c1-940312416cc3\") " pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl"
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.180366 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl"
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.699936 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl"]
Feb 27 17:13:18 crc kubenswrapper[4708]: W0227 17:13:18.707065 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e7ab31e_da8a_4ae8_a4c1_940312416cc3.slice/crio-a52a29666335b5830fffc085c04abd706bca2c8cffca8a53275c912bc38fd0ee WatchSource:0}: Error finding container a52a29666335b5830fffc085c04abd706bca2c8cffca8a53275c912bc38fd0ee: Status 404 returned error can't find the container with id a52a29666335b5830fffc085c04abd706bca2c8cffca8a53275c912bc38fd0ee
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.737433 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5" event={"ID":"dde28522-3138-4c50-b3c5-1e26d61b96e1","Type":"ContainerStarted","Data":"d85114297b3a31bfcd55cb2994d8a5966f11c840201661c9aa1e571075683eb6"}
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.737517 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5"
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.738661 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-54688575f-d29bm" event={"ID":"f52bc8c9-30b0-4f44-8f5c-f2af4c7176d5","Type":"ContainerStarted","Data":"fe658bd3af2921363fce855c34f393b830e22d27663df088a7cad90fcf22f329"}
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.738898 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-54688575f-d29bm"
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.740701 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb" event={"ID":"45efdeea-5e44-44b0-b9d0-e2cc8c441168","Type":"ContainerStarted","Data":"cec6eabc9fbd8b6c8be016b91e37a6da83bf18bfa0a0e4cca98486be19552907"}
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.741171 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb"
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.742348 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4" event={"ID":"5cb187f0-85c4-48ef-90fb-6a6c896188e5","Type":"ContainerStarted","Data":"8461423bf35b04c0e1e28414473ca26ac76a0184b52b60c3641268a62378f2f8"}
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.742668 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4"
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.751326 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj" event={"ID":"156803c8-e795-452c-9244-b93c2b3af9e7","Type":"ContainerStarted","Data":"9391dde7d77b037f1990f3451de8085fa08ba9f765287805bdea44ffa95bb71c"}
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.752103 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj"
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.759550 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" event={"ID":"8e7ab31e-da8a-4ae8-a4c1-940312416cc3","Type":"ContainerStarted","Data":"a52a29666335b5830fffc085c04abd706bca2c8cffca8a53275c912bc38fd0ee"}
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.763186 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5" podStartSLOduration=27.441182168 podStartE2EDuration="33.763175295s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:13:11.169201514 +0000 UTC m=+1189.684999101" lastFinishedPulling="2026-02-27 17:13:17.491194611 +0000 UTC m=+1196.006992228" observedRunningTime="2026-02-27 17:13:18.762230598 +0000 UTC m=+1197.278028185" watchObservedRunningTime="2026-02-27 17:13:18.763175295 +0000 UTC m=+1197.278972882"
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.768558 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" event={"ID":"c0bf6b0d-d70d-4498-a61f-cd7354439357","Type":"ContainerStarted","Data":"be0033b9a0158cc30426cdb39f313a17172b323f4a8db2de067330a28ad00e85"}
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.768904 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2"
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.799174 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb" podStartSLOduration=3.047489667 podStartE2EDuration="33.799150064s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:46.725802099 +0000 UTC m=+1165.241599686" lastFinishedPulling="2026-02-27 17:13:17.477462456 +0000 UTC m=+1195.993260083" observedRunningTime="2026-02-27 17:13:18.792821066 +0000 UTC m=+1197.308618653" watchObservedRunningTime="2026-02-27 17:13:18.799150064 +0000 UTC m=+1197.314947651"
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.813762 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-54688575f-d29bm" podStartSLOduration=2.873952868 podStartE2EDuration="33.813739363s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:46.851567937 +0000 UTC m=+1165.367365524" lastFinishedPulling="2026-02-27 17:13:17.791354422 +0000 UTC m=+1196.307152019" observedRunningTime="2026-02-27 17:13:18.808010872 +0000 UTC m=+1197.323808459" watchObservedRunningTime="2026-02-27 17:13:18.813739363 +0000 UTC m=+1197.329536940"
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.830446 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4" podStartSLOduration=3.401443464 podStartE2EDuration="33.830422601s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:47.223763807 +0000 UTC m=+1165.739561394" lastFinishedPulling="2026-02-27 17:13:17.652742944 +0000 UTC m=+1196.168540531" observedRunningTime="2026-02-27 17:13:18.822496109 +0000 UTC m=+1197.338293696" watchObservedRunningTime="2026-02-27 17:13:18.830422601 +0000 UTC m=+1197.346220198"
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.861657 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj" podStartSLOduration=3.028477886 podStartE2EDuration="33.861620456s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:46.952381589 +0000 UTC m=+1165.468179176" lastFinishedPulling="2026-02-27 17:13:17.785524119 +0000 UTC m=+1196.301321746" observedRunningTime="2026-02-27 17:13:18.848820757 +0000 UTC m=+1197.364618344" watchObservedRunningTime="2026-02-27 17:13:18.861620456 +0000 UTC m=+1197.377418043"
Feb 27 17:13:18 crc kubenswrapper[4708]: I0227 17:13:18.869449 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2" podStartSLOduration=26.849807688 podStartE2EDuration="33.869419705s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:13:10.455362509 +0000 UTC m=+1188.971160096" lastFinishedPulling="2026-02-27 17:13:17.474974486 +0000 UTC m=+1195.990772113" observedRunningTime="2026-02-27 17:13:18.868321274 +0000 UTC m=+1197.384118851" watchObservedRunningTime="2026-02-27 17:13:18.869419705 +0000 UTC m=+1197.385217292"
Feb 27 17:13:19 crc kubenswrapper[4708]: I0227 17:13:19.778245 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh" event={"ID":"038010da-affb-4db1-88e9-67e8ee1304cc","Type":"ContainerStarted","Data":"b6cc16f9baffe67fb7426945d3ed51b1a846948ce97a71ee9027aa5430072377"}
Feb 27 17:13:19 crc kubenswrapper[4708]: I0227 17:13:19.779938 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh"
Feb 27 17:13:19 crc kubenswrapper[4708]: I0227 17:13:19.780260 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" event={"ID":"8e7ab31e-da8a-4ae8-a4c1-940312416cc3","Type":"ContainerStarted","Data":"07058d23cdf2bd93b95a5495d7076aecc0a435902cdfff8ccfa74f064ee6ce81"}
Feb 27 17:13:19 crc kubenswrapper[4708]: I0227 17:13:19.803493 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh" podStartSLOduration=2.251233129 podStartE2EDuration="34.803476549s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:46.385821611 +0000 UTC m=+1164.901619198" lastFinishedPulling="2026-02-27 17:13:18.938065031 +0000 UTC m=+1197.453862618" observedRunningTime="2026-02-27 17:13:19.798338535 +0000 UTC m=+1198.314136142" watchObservedRunningTime="2026-02-27 17:13:19.803476549 +0000 UTC m=+1198.319274156"
Feb 27 17:13:19 crc kubenswrapper[4708]: I0227 17:13:19.856161 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl" podStartSLOduration=34.856144087 podStartE2EDuration="34.856144087s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:13:19.851757504 +0000 UTC m=+1198.367555111" watchObservedRunningTime="2026-02-27 17:13:19.856144087 +0000 UTC m=+1198.371941674"
Feb 27 17:13:20 crc kubenswrapper[4708]: I0227 17:13:20.789716 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl"
Feb 27 17:13:25 crc kubenswrapper[4708]: I0227 17:13:25.574926 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6db6876945-2hw5z"
Feb 27 17:13:25 crc kubenswrapper[4708]: I0227 17:13:25.575663 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-55d77d7b5c-4kwbb"
Feb 27 17:13:25 crc kubenswrapper[4708]: I0227 17:13:25.586661 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-5d87c9d997-wffwh"
Feb 27 17:13:25 crc kubenswrapper[4708]: I0227 17:13:25.618255 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-cf99c678f-c4pj6"
Feb 27 17:13:25 crc kubenswrapper[4708]: I0227 17:13:25.630158 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-64db6967f8-5nrb9"
Feb 27 17:13:25 crc kubenswrapper[4708]: I0227 17:13:25.653262 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-78bc7f9bd9-wv64j"
Feb 27 17:13:25 crc kubenswrapper[4708]: I0227 17:13:25.850479 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-545456dc4-wp777"
Feb 27 17:13:25 crc kubenswrapper[4708]: I0227 17:13:25.871644 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-67d996989d-kj8hq"
Feb 27 17:13:25 crc kubenswrapper[4708]: I0227 17:13:25.945956 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-556b8b874-mcvwl"
Feb 27 17:13:25 crc kubenswrapper[4708]: I0227 17:13:25.956671 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-54688575f-d29bm"
Feb 27 17:13:25 crc kubenswrapper[4708]: I0227 17:13:25.970319 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-74b6b5dc96-vcjxj"
Feb 27 17:13:25 crc kubenswrapper[4708]: I0227 17:13:25.972182 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-5d86c7ddb7-dqxzg"
Feb 27 17:13:26 crc kubenswrapper[4708]: I0227 17:13:26.054123 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-75684d597f-rj2g4"
Feb 27 17:13:28 crc kubenswrapper[4708]: I0227 17:13:28.189694 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-b89df8bf4-c7qtl"
Feb 27 17:13:31 crc kubenswrapper[4708]: I0227 17:13:31.288544 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5"
Feb 27 17:13:31 crc kubenswrapper[4708]: I0227 17:13:31.944746 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2"
Feb 27 17:13:35 crc kubenswrapper[4708]: I0227 17:13:35.932480 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst" event={"ID":"ae129c1e-ae9f-4cef-93fd-b186bf0eb275","Type":"ContainerStarted","Data":"a70c420274666f1df8179cd6d2da43b1582bff44ee219c064ef1b89e47721ce3"}
Feb 27 17:13:35 crc kubenswrapper[4708]: I0227 17:13:35.933306 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst"
Feb 27 17:13:35 crc kubenswrapper[4708]: I0227 17:13:35.934540 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2" event={"ID":"f2e64742-9a09-4f5a-b8d5-ec938e7ac27b","Type":"ContainerStarted","Data":"7ed687b6e1d687d0a1991ba5b8747b61b0e3f6a5b2161f53373f84ae6de19dbe"}
Feb 27 17:13:35 crc kubenswrapper[4708]: I0227 17:13:35.934968 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2"
Feb 27 17:13:35 crc kubenswrapper[4708]: I0227 17:13:35.977822 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2" podStartSLOduration=2.902046674 podStartE2EDuration="50.977796806s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:46.7187734 +0000 UTC m=+1165.234570987" lastFinishedPulling="2026-02-27 17:13:34.794523502 +0000 UTC m=+1213.310321119" observedRunningTime="2026-02-27 17:13:35.975281666 +0000 UTC m=+1214.491079263" watchObservedRunningTime="2026-02-27 17:13:35.977796806 +0000 UTC m=+1214.493594433"
Feb 27 17:13:35 crc kubenswrapper[4708]: I0227 17:13:35.984006 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst" podStartSLOduration=3.235994231 podStartE2EDuration="50.98399223s" podCreationTimestamp="2026-02-27 17:12:45 +0000 UTC" firstStartedPulling="2026-02-27 17:12:47.046389849 +0000 UTC m=+1165.562187436" lastFinishedPulling="2026-02-27 17:13:34.794387818 +0000 UTC m=+1213.310185435" observedRunningTime="2026-02-27 17:13:35.958079053 +0000 UTC m=+1214.473876650" watchObservedRunningTime="2026-02-27 17:13:35.98399223 +0000 UTC m=+1214.499789857"
Feb 27 17:13:45 crc kubenswrapper[4708]: I0227 17:13:45.787220 4708 kubelet.go:2542] "SyncLoop (probe)"
probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-55ffd4876b-n66z2" Feb 27 17:13:46 crc kubenswrapper[4708]: I0227 17:13:46.088390 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-9b9ff9f4d-8jdst" Feb 27 17:14:00 crc kubenswrapper[4708]: I0227 17:14:00.150493 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536874-bdrv8"] Feb 27 17:14:00 crc kubenswrapper[4708]: I0227 17:14:00.153186 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536874-bdrv8" Feb 27 17:14:00 crc kubenswrapper[4708]: I0227 17:14:00.156576 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:14:00 crc kubenswrapper[4708]: I0227 17:14:00.156665 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:14:00 crc kubenswrapper[4708]: I0227 17:14:00.157398 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:14:00 crc kubenswrapper[4708]: I0227 17:14:00.160474 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536874-bdrv8"] Feb 27 17:14:00 crc kubenswrapper[4708]: I0227 17:14:00.310668 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdc96\" (UniqueName: \"kubernetes.io/projected/87947d39-2a62-41d6-836f-b385d2b3ae28-kube-api-access-tdc96\") pod \"auto-csr-approver-29536874-bdrv8\" (UID: \"87947d39-2a62-41d6-836f-b385d2b3ae28\") " pod="openshift-infra/auto-csr-approver-29536874-bdrv8" Feb 27 17:14:00 crc kubenswrapper[4708]: I0227 17:14:00.413140 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdc96\" (UniqueName: \"kubernetes.io/projected/87947d39-2a62-41d6-836f-b385d2b3ae28-kube-api-access-tdc96\") pod \"auto-csr-approver-29536874-bdrv8\" (UID: \"87947d39-2a62-41d6-836f-b385d2b3ae28\") " pod="openshift-infra/auto-csr-approver-29536874-bdrv8" Feb 27 17:14:00 crc kubenswrapper[4708]: I0227 17:14:00.457296 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdc96\" (UniqueName: \"kubernetes.io/projected/87947d39-2a62-41d6-836f-b385d2b3ae28-kube-api-access-tdc96\") pod \"auto-csr-approver-29536874-bdrv8\" (UID: \"87947d39-2a62-41d6-836f-b385d2b3ae28\") " pod="openshift-infra/auto-csr-approver-29536874-bdrv8" Feb 27 17:14:00 crc kubenswrapper[4708]: I0227 17:14:00.485938 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536874-bdrv8" Feb 27 17:14:00 crc kubenswrapper[4708]: I0227 17:14:00.788154 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536874-bdrv8"] Feb 27 17:14:01 crc kubenswrapper[4708]: I0227 17:14:01.217231 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536874-bdrv8" event={"ID":"87947d39-2a62-41d6-836f-b385d2b3ae28","Type":"ContainerStarted","Data":"75ce677ce3541eecf518a3741ba8abe5ae3690aab11a8693329fb458ab6e67e2"} Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.773648 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58hf7"] Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.775250 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-58hf7" Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.778434 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.778513 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.778658 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-zc8mz" Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.782245 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.782444 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58hf7"] Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.845746 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x9qft"] Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.847612 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.849475 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.863046 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x9qft"] Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.969501 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-config\") pod \"dnsmasq-dns-78dd6ddcc-x9qft\" (UID: \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.969573 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf-config\") pod \"dnsmasq-dns-675f4bcbfc-58hf7\" (UID: \"ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58hf7" Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.969605 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x9qft\" (UID: \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.969745 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xz5qc\" (UniqueName: \"kubernetes.io/projected/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-kube-api-access-xz5qc\") pod \"dnsmasq-dns-78dd6ddcc-x9qft\" (UID: \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" Feb 27 17:14:02 crc kubenswrapper[4708]: I0227 17:14:02.969994 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cfc4l\" (UniqueName: \"kubernetes.io/projected/ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf-kube-api-access-cfc4l\") pod \"dnsmasq-dns-675f4bcbfc-58hf7\" (UID: \"ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58hf7" Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.071504 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x9qft\" (UID: \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.071578 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xz5qc\" (UniqueName: \"kubernetes.io/projected/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-kube-api-access-xz5qc\") pod \"dnsmasq-dns-78dd6ddcc-x9qft\" (UID: \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.072679 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-x9qft\" (UID: \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" Feb 27 
17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.073181 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cfc4l\" (UniqueName: \"kubernetes.io/projected/ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf-kube-api-access-cfc4l\") pod \"dnsmasq-dns-675f4bcbfc-58hf7\" (UID: \"ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58hf7" Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.073251 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-config\") pod \"dnsmasq-dns-78dd6ddcc-x9qft\" (UID: \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.073276 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf-config\") pod \"dnsmasq-dns-675f4bcbfc-58hf7\" (UID: \"ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58hf7" Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.073933 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf-config\") pod \"dnsmasq-dns-675f4bcbfc-58hf7\" (UID: \"ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58hf7" Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.074798 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-config\") pod \"dnsmasq-dns-78dd6ddcc-x9qft\" (UID: \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.094539 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cfc4l\" (UniqueName: \"kubernetes.io/projected/ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf-kube-api-access-cfc4l\") pod \"dnsmasq-dns-675f4bcbfc-58hf7\" (UID: \"ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf\") " pod="openstack/dnsmasq-dns-675f4bcbfc-58hf7" Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.109157 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xz5qc\" (UniqueName: \"kubernetes.io/projected/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-kube-api-access-xz5qc\") pod \"dnsmasq-dns-78dd6ddcc-x9qft\" (UID: \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\") " pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.166490 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.235745 4708 generic.go:334] "Generic (PLEG): container finished" podID="87947d39-2a62-41d6-836f-b385d2b3ae28" containerID="7ba36d4b083743d4413d8168aa6f629b8004e385f94b162439c2a26d6d87c5d8" exitCode=0 Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.235790 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536874-bdrv8" event={"ID":"87947d39-2a62-41d6-836f-b385d2b3ae28","Type":"ContainerDied","Data":"7ba36d4b083743d4413d8168aa6f629b8004e385f94b162439c2a26d6d87c5d8"} Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.390468 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-58hf7" Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.665014 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x9qft"] Feb 27 17:14:03 crc kubenswrapper[4708]: W0227 17:14:03.706114 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9192eb25_4187_4e0e_87ed_c98c9c6f7fdb.slice/crio-c197358944d140444e56b578fa7ee69bb6b1d37d2c3e7699c45f10969a784889 WatchSource:0}: Error finding container c197358944d140444e56b578fa7ee69bb6b1d37d2c3e7699c45f10969a784889: Status 404 returned error can't find the container with id c197358944d140444e56b578fa7ee69bb6b1d37d2c3e7699c45f10969a784889 Feb 27 17:14:03 crc kubenswrapper[4708]: I0227 17:14:03.755401 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58hf7"] Feb 27 17:14:03 crc kubenswrapper[4708]: W0227 17:14:03.766070 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podad262df7_116e_4dd5_9bc4_1e1bf9ee66bf.slice/crio-2f998b2f8b21cfd2223a58dbd994517a51815c59b3232e5bee790e273602dc3c WatchSource:0}: Error finding container 2f998b2f8b21cfd2223a58dbd994517a51815c59b3232e5bee790e273602dc3c: Status 404 returned error can't find the container with id 2f998b2f8b21cfd2223a58dbd994517a51815c59b3232e5bee790e273602dc3c Feb 27 17:14:04 crc kubenswrapper[4708]: I0227 17:14:04.244665 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-58hf7" event={"ID":"ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf","Type":"ContainerStarted","Data":"2f998b2f8b21cfd2223a58dbd994517a51815c59b3232e5bee790e273602dc3c"} Feb 27 17:14:04 crc kubenswrapper[4708]: I0227 17:14:04.245862 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" event={"ID":"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb","Type":"ContainerStarted","Data":"c197358944d140444e56b578fa7ee69bb6b1d37d2c3e7699c45f10969a784889"} Feb 27 17:14:04 crc kubenswrapper[4708]: I0227 17:14:04.582274 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536874-bdrv8" Feb 27 17:14:04 crc kubenswrapper[4708]: I0227 17:14:04.708173 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tdc96\" (UniqueName: \"kubernetes.io/projected/87947d39-2a62-41d6-836f-b385d2b3ae28-kube-api-access-tdc96\") pod \"87947d39-2a62-41d6-836f-b385d2b3ae28\" (UID: \"87947d39-2a62-41d6-836f-b385d2b3ae28\") " Feb 27 17:14:04 crc kubenswrapper[4708]: I0227 17:14:04.726067 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87947d39-2a62-41d6-836f-b385d2b3ae28-kube-api-access-tdc96" (OuterVolumeSpecName: "kube-api-access-tdc96") pod "87947d39-2a62-41d6-836f-b385d2b3ae28" (UID: "87947d39-2a62-41d6-836f-b385d2b3ae28"). InnerVolumeSpecName "kube-api-access-tdc96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:14:04 crc kubenswrapper[4708]: I0227 17:14:04.810466 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tdc96\" (UniqueName: \"kubernetes.io/projected/87947d39-2a62-41d6-836f-b385d2b3ae28-kube-api-access-tdc96\") on node \"crc\" DevicePath \"\"" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.260287 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536874-bdrv8" event={"ID":"87947d39-2a62-41d6-836f-b385d2b3ae28","Type":"ContainerDied","Data":"75ce677ce3541eecf518a3741ba8abe5ae3690aab11a8693329fb458ab6e67e2"} Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.260339 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75ce677ce3541eecf518a3741ba8abe5ae3690aab11a8693329fb458ab6e67e2" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.260399 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536874-bdrv8" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.506831 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58hf7"] Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.550093 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-g6vsk"] Feb 27 17:14:05 crc kubenswrapper[4708]: E0227 17:14:05.550436 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87947d39-2a62-41d6-836f-b385d2b3ae28" containerName="oc" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.550453 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="87947d39-2a62-41d6-836f-b385d2b3ae28" containerName="oc" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.550632 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="87947d39-2a62-41d6-836f-b385d2b3ae28" containerName="oc" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.551482 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.568901 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-g6vsk"] Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.647416 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f59t\" (UniqueName: \"kubernetes.io/projected/0e525119-22e4-4879-bec2-b7d830c00fcf-kube-api-access-4f59t\") pod \"dnsmasq-dns-666b6646f7-g6vsk\" (UID: \"0e525119-22e4-4879-bec2-b7d830c00fcf\") " pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.647485 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e525119-22e4-4879-bec2-b7d830c00fcf-dns-svc\") pod \"dnsmasq-dns-666b6646f7-g6vsk\" (UID: \"0e525119-22e4-4879-bec2-b7d830c00fcf\") " pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.647514 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e525119-22e4-4879-bec2-b7d830c00fcf-config\") pod \"dnsmasq-dns-666b6646f7-g6vsk\" (UID: \"0e525119-22e4-4879-bec2-b7d830c00fcf\") " pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.655816 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536868-vzmzz"] Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.661276 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536868-vzmzz"] Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.764433 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e525119-22e4-4879-bec2-b7d830c00fcf-dns-svc\") pod \"dnsmasq-dns-666b6646f7-g6vsk\" (UID: \"0e525119-22e4-4879-bec2-b7d830c00fcf\") " pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.764481 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e525119-22e4-4879-bec2-b7d830c00fcf-config\") pod \"dnsmasq-dns-666b6646f7-g6vsk\" (UID: \"0e525119-22e4-4879-bec2-b7d830c00fcf\") " pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.765656 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e525119-22e4-4879-bec2-b7d830c00fcf-dns-svc\") pod \"dnsmasq-dns-666b6646f7-g6vsk\" (UID: \"0e525119-22e4-4879-bec2-b7d830c00fcf\") " pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.765801 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e525119-22e4-4879-bec2-b7d830c00fcf-config\") pod \"dnsmasq-dns-666b6646f7-g6vsk\" (UID: \"0e525119-22e4-4879-bec2-b7d830c00fcf\") " pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.764625 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f59t\" (UniqueName: \"kubernetes.io/projected/0e525119-22e4-4879-bec2-b7d830c00fcf-kube-api-access-4f59t\") pod 
\"dnsmasq-dns-666b6646f7-g6vsk\" (UID: \"0e525119-22e4-4879-bec2-b7d830c00fcf\") " pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.782408 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x9qft"] Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.789681 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f59t\" (UniqueName: \"kubernetes.io/projected/0e525119-22e4-4879-bec2-b7d830c00fcf-kube-api-access-4f59t\") pod \"dnsmasq-dns-666b6646f7-g6vsk\" (UID: \"0e525119-22e4-4879-bec2-b7d830c00fcf\") " pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.797300 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2dpsz"] Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.800612 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.868788 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-config\") pod \"dnsmasq-dns-57d769cc4f-2dpsz\" (UID: \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.868937 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55nfg\" (UniqueName: \"kubernetes.io/projected/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-kube-api-access-55nfg\") pod \"dnsmasq-dns-57d769cc4f-2dpsz\" (UID: \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.869002 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2dpsz\" (UID: \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.876597 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.894185 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2dpsz"] Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.970645 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-config\") pod \"dnsmasq-dns-57d769cc4f-2dpsz\" (UID: \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.970740 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55nfg\" (UniqueName: \"kubernetes.io/projected/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-kube-api-access-55nfg\") pod \"dnsmasq-dns-57d769cc4f-2dpsz\" (UID: \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.970785 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2dpsz\" (UID: \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.971523 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-2dpsz\" (UID: \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" Feb 27 17:14:05 crc kubenswrapper[4708]: I0227 17:14:05.971709 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-config\") pod \"dnsmasq-dns-57d769cc4f-2dpsz\" (UID: \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.020592 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55nfg\" (UniqueName: \"kubernetes.io/projected/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-kube-api-access-55nfg\") pod \"dnsmasq-dns-57d769cc4f-2dpsz\" (UID: \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\") " pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.124734 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.302075 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ec21f8e-82e7-4d31-bc5e-906388eef4e0" path="/var/lib/kubelet/pods/6ec21f8e-82e7-4d31-bc5e-906388eef4e0/volumes" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.398724 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2dpsz"] Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.476905 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-g6vsk"] Feb 27 17:14:06 crc kubenswrapper[4708]: W0227 17:14:06.486054 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e525119_22e4_4879_bec2_b7d830c00fcf.slice/crio-c6511cc7bf105090bd144c9d3c654a314b5179a76cb7bc9699d3c9615f375b2a WatchSource:0}: Error finding container c6511cc7bf105090bd144c9d3c654a314b5179a76cb7bc9699d3c9615f375b2a: Status 404 returned error can't find the container with id c6511cc7bf105090bd144c9d3c654a314b5179a76cb7bc9699d3c9615f375b2a Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.678118 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.681829 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.685642 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.685988 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.686122 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.686274 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zgrwx" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.686328 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.686391 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.687963 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.713601 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.882297 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4tq2z\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-kube-api-access-4tq2z\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.882539 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.882565 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.882602 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-server-conf\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.882629 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.882647 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32b89444-fadf-43c8-b552-e5071fc91481-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.882671 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32b89444-fadf-43c8-b552-e5071fc91481-pod-info\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.882835 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.882876 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.882962 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.883052 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-config-data\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.984484 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-config-data\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.984543 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4tq2z\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-kube-api-access-4tq2z\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.984565 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.984588 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.984621 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-server-conf\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.984647 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.984664 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32b89444-fadf-43c8-b552-e5071fc91481-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.984686 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32b89444-fadf-43c8-b552-e5071fc91481-pod-info\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.984719 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc 
kubenswrapper[4708]: I0227 17:14:06.984742 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.984763 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.985251 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.985894 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-config-data\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.986004 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.987749 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-server-conf\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.988684 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.988691 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
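
The csi_attacher.go:380 entry above records the kubelet skipping the device-staging step ("MountDevice") because the kubevirt.io.hostpath-provisioner driver does not advertise the STAGE_UNSTAGE_VOLUME node capability; MountVolume.MountDevice therefore "succeeds" immediately and the volume goes straight to the per-pod publish step. A minimal Go sketch of that capability probe, assuming the CSI spec's Go bindings (github.com/container-storage-interface/spec); the helper name and client wiring are illustrative, not the kubelet's actual code:

package csiprobe

import (
	"context"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeSupportsStageUnstage asks the CSI driver's node service which
// capabilities it advertises and reports whether STAGE_UNSTAGE_VOLUME
// is among them. When it is absent, the caller can skip NodeStageVolume
// entirely, which is what the "STAGE_UNSTAGE_VOLUME capability not set.
// Skipping MountDevice..." line above reflects. (Illustrative sketch,
// not kubelet source.)
func nodeSupportsStageUnstage(ctx context.Context, client csi.NodeClient) (bool, error) {
	resp, err := client.NodeGetCapabilities(ctx, &csi.NodeGetCapabilitiesRequest{})
	if err != nil {
		return false, err
	}
	for _, c := range resp.GetCapabilities() {
		if c.GetRpc().GetType() == csi.NodeServiceCapability_RPC_STAGE_UNSTAGE_VOLUME {
			return true, nil
		}
	}
	return false, nil
}

For a hostpath-style provisioner there is no block device to format or stage, so skipping the stage step is expected; the subsequent MountVolume.SetUp entries for pvc-cef6b270-... show the per-pod mount then proceeding normally.
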
Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.988757 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ef34f33f9707a5269ee06d9790943c794b4a35585830a0fadfbdb657babc33a0/globalmount\"" pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.991790 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.992507 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.993483 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:14:06 crc kubenswrapper[4708]: I0227 17:14:06.995406 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32b89444-fadf-43c8-b552-e5071fc91481-pod-info\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.000223 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.002925 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4tq2z\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-kube-api-access-4tq2z\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.003809 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.004072 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.004104 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.004146 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.004266 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9zx8k" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.004372 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.005564 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.006681 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.016117 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32b89444-fadf-43c8-b552-e5071fc91481-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.049526 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\") pod \"rabbitmq-server-0\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") " pod="openstack/rabbitmq-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.188592 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.188641 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.188666 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/eb2fe191-cb57-46a6-9797-c9890640ff74-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.188688 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.188705 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.188728 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.188750 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rht9n\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-kube-api-access-rht9n\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.188801 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.188820 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eb2fe191-cb57-46a6-9797-c9890640ff74-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.188855 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.188874 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b24d01da-b002-4c89-a426-a8dd80e44135\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b24d01da-b002-4c89-a426-a8dd80e44135\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.290607 4708 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.290663 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eb2fe191-cb57-46a6-9797-c9890640ff74-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.290685 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.290708 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b24d01da-b002-4c89-a426-a8dd80e44135\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b24d01da-b002-4c89-a426-a8dd80e44135\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.290746 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.290789 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.290809 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eb2fe191-cb57-46a6-9797-c9890640ff74-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.290831 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.290927 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.290951 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" 
(UniqueName: \"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.290975 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rht9n\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-kube-api-access-rht9n\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.293654 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.293676 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b24d01da-b002-4c89-a426-a8dd80e44135\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b24d01da-b002-4c89-a426-a8dd80e44135\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9f6ff909c36baed36fbb0de76c440cc5ed218f0a068c651800017aff83661890/globalmount\"" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.294583 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.307438 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.311834 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.312003 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.312448 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eb2fe191-cb57-46a6-9797-c9890640ff74-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.314190 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.317564 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.318012 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.320245 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.323207 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rht9n\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-kube-api-access-rht9n\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.332431 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eb2fe191-cb57-46a6-9797-c9890640ff74-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.355138 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" event={"ID":"0e525119-22e4-4879-bec2-b7d830c00fcf","Type":"ContainerStarted","Data":"c6511cc7bf105090bd144c9d3c654a314b5179a76cb7bc9699d3c9615f375b2a"} Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.356758 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" event={"ID":"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d","Type":"ContainerStarted","Data":"2c21e96b85589ef33cf0fb691be4b9f52446404f9ec3cd30581ccbffbbee3648"} Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.386887 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b24d01da-b002-4c89-a426-a8dd80e44135\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b24d01da-b002-4c89-a426-a8dd80e44135\") pod \"rabbitmq-cell1-server-0\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.670877 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:14:07 crc kubenswrapper[4708]: I0227 17:14:07.881422 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.188398 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:14:08 crc kubenswrapper[4708]: W0227 17:14:08.195912 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb2fe191_cb57_46a6_9797_c9890640ff74.slice/crio-7e7febe31afdc5a8f3d4c0db2807844a2964205eec12a027c4aadf41c269de15 WatchSource:0}: Error finding container 7e7febe31afdc5a8f3d4c0db2807844a2964205eec12a027c4aadf41c269de15: Status 404 returned error can't find the container with id 7e7febe31afdc5a8f3d4c0db2807844a2964205eec12a027c4aadf41c269de15 Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.352925 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.358074 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.367807 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.371420 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.373255 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-sqjvd" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.373746 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.374051 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.377572 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eb2fe191-cb57-46a6-9797-c9890640ff74","Type":"ContainerStarted","Data":"7e7febe31afdc5a8f3d4c0db2807844a2964205eec12a027c4aadf41c269de15"} Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.378546 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.381439 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"32b89444-fadf-43c8-b552-e5071fc91481","Type":"ContainerStarted","Data":"6edf45486a2fdec51bf1be93b35309afd6be0bacf3d91179bfc684e86c59caa6"} Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.521883 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-config-data-default\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.521953 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-config-data-generated\") pod 
\"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.521999 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.522070 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-d9f37ad7-c2d4-42a7-9f04-4c98e641d863\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9f37ad7-c2d4-42a7-9f04-4c98e641d863\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.522119 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.522139 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.522220 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-kolla-config\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.522328 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gk67\" (UniqueName: \"kubernetes.io/projected/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-kube-api-access-5gk67\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.623986 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.624033 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-d9f37ad7-c2d4-42a7-9f04-4c98e641d863\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9f37ad7-c2d4-42a7-9f04-4c98e641d863\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.624052 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-operator-scripts\") pod 
\"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.624075 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.624107 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-kolla-config\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.624178 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gk67\" (UniqueName: \"kubernetes.io/projected/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-kube-api-access-5gk67\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.624232 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-config-data-default\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.626136 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-kolla-config\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.624248 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.626777 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-config-data-generated\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.626964 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-config-data-default\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.626971 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-operator-scripts\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.633735 4708 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.636676 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.645752 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gk67\" (UniqueName: \"kubernetes.io/projected/6f6f6892-d9d6-4f71-bc65-8e47c15bddc1-kube-api-access-5gk67\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.654651 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.654687 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-d9f37ad7-c2d4-42a7-9f04-4c98e641d863\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9f37ad7-c2d4-42a7-9f04-4c98e641d863\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c88dad72e1022c211660d236aa67fa6016db61d52e7d1c41553973e8ce5e946e/globalmount\"" pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.705450 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-d9f37ad7-c2d4-42a7-9f04-4c98e641d863\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-d9f37ad7-c2d4-42a7-9f04-4c98e641d863\") pod \"openstack-galera-0\" (UID: \"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1\") " pod="openstack/openstack-galera-0" Feb 27 17:14:08 crc kubenswrapper[4708]: I0227 17:14:08.981066 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.747082 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.748597 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.749026 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.751223 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.751465 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.752317 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-7h68d" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.752438 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.868925 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c3332de-a21c-4552-a037-c5665b4c0927-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.868996 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4c3332de-a21c-4552-a037-c5665b4c0927-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.869117 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8d500279-3399-4ea3-b8b5-ee20a689a47f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d500279-3399-4ea3-b8b5-ee20a689a47f\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.869149 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c3332de-a21c-4552-a037-c5665b4c0927-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.869191 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg7tm\" (UniqueName: \"kubernetes.io/projected/4c3332de-a21c-4552-a037-c5665b4c0927-kube-api-access-mg7tm\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.869223 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c3332de-a21c-4552-a037-c5665b4c0927-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.869243 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c3332de-a21c-4552-a037-c5665b4c0927-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.869266 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c3332de-a21c-4552-a037-c5665b4c0927-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.941516 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.942679 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.951629 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.951819 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-qmr8b" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.952058 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.953372 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.971639 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c3332de-a21c-4552-a037-c5665b4c0927-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.971702 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c3332de-a21c-4552-a037-c5665b4c0927-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.971735 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c3332de-a21c-4552-a037-c5665b4c0927-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.971790 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c3332de-a21c-4552-a037-c5665b4c0927-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.971820 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4c3332de-a21c-4552-a037-c5665b4c0927-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " 
pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.971899 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-8d500279-3399-4ea3-b8b5-ee20a689a47f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d500279-3399-4ea3-b8b5-ee20a689a47f\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.971936 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c3332de-a21c-4552-a037-c5665b4c0927-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.971967 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mg7tm\" (UniqueName: \"kubernetes.io/projected/4c3332de-a21c-4552-a037-c5665b4c0927-kube-api-access-mg7tm\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.972088 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/4c3332de-a21c-4552-a037-c5665b4c0927-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.972899 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/4c3332de-a21c-4552-a037-c5665b4c0927-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.973624 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/4c3332de-a21c-4552-a037-c5665b4c0927-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.975492 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.975524 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-8d500279-3399-4ea3-b8b5-ee20a689a47f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d500279-3399-4ea3-b8b5-ee20a689a47f\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/34a03d64a12a33c61ffdfebe1abda1db700f3284ea36dd4a104b5d4d94e1151c/globalmount\"" pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.977008 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c3332de-a21c-4552-a037-c5665b4c0927-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.993401 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4c3332de-a21c-4552-a037-c5665b4c0927-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:09 crc kubenswrapper[4708]: I0227 17:14:09.998509 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mg7tm\" (UniqueName: \"kubernetes.io/projected/4c3332de-a21c-4552-a037-c5665b4c0927-kube-api-access-mg7tm\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.000463 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/4c3332de-a21c-4552-a037-c5665b4c0927-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.031341 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-8d500279-3399-4ea3-b8b5-ee20a689a47f\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-8d500279-3399-4ea3-b8b5-ee20a689a47f\") pod \"openstack-cell1-galera-0\" (UID: \"4c3332de-a21c-4552-a037-c5665b4c0927\") " pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.049124 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.073356 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ddvjt\" (UniqueName: \"kubernetes.io/projected/0c436943-14ee-474c-a393-c067fd0dec97-kube-api-access-ddvjt\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.073458 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c436943-14ee-474c-a393-c067fd0dec97-kolla-config\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.073605 4708 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c436943-14ee-474c-a393-c067fd0dec97-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.073655 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c436943-14ee-474c-a393-c067fd0dec97-config-data\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.073796 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c436943-14ee-474c-a393-c067fd0dec97-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.092335 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.176297 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c436943-14ee-474c-a393-c067fd0dec97-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.176663 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ddvjt\" (UniqueName: \"kubernetes.io/projected/0c436943-14ee-474c-a393-c067fd0dec97-kube-api-access-ddvjt\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.176725 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c436943-14ee-474c-a393-c067fd0dec97-kolla-config\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.176777 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c436943-14ee-474c-a393-c067fd0dec97-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.176806 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/0c436943-14ee-474c-a393-c067fd0dec97-config-data\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.178263 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/0c436943-14ee-474c-a393-c067fd0dec97-kolla-config\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.178968 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/0c436943-14ee-474c-a393-c067fd0dec97-config-data\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.182774 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/0c436943-14ee-474c-a393-c067fd0dec97-memcached-tls-certs\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.183398 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0c436943-14ee-474c-a393-c067fd0dec97-combined-ca-bundle\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.199435 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ddvjt\" (UniqueName: \"kubernetes.io/projected/0c436943-14ee-474c-a393-c067fd0dec97-kube-api-access-ddvjt\") pod \"memcached-0\" (UID: \"0c436943-14ee-474c-a393-c067fd0dec97\") " pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.266665 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.586070 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.723414 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1","Type":"ContainerStarted","Data":"7c2c4a94a672447bf696049c0d7ab3cb4d68eba0cbbc8010cca34395c637f25f"} Feb 27 17:14:10 crc kubenswrapper[4708]: I0227 17:14:10.792119 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Feb 27 17:14:11 crc kubenswrapper[4708]: I0227 17:14:11.991884 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 17:14:11 crc kubenswrapper[4708]: I0227 17:14:11.992979 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 17:14:11 crc kubenswrapper[4708]: I0227 17:14:11.995470 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-mncz7" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.014514 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.042294 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dczss\" (UniqueName: \"kubernetes.io/projected/83739fce-8870-491c-844b-9674e73b937a-kube-api-access-dczss\") pod \"kube-state-metrics-0\" (UID: \"83739fce-8870-491c-844b-9674e73b937a\") " pod="openstack/kube-state-metrics-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.143696 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dczss\" (UniqueName: \"kubernetes.io/projected/83739fce-8870-491c-844b-9674e73b937a-kube-api-access-dczss\") pod \"kube-state-metrics-0\" (UID: \"83739fce-8870-491c-844b-9674e73b937a\") " pod="openstack/kube-state-metrics-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.178324 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dczss\" (UniqueName: \"kubernetes.io/projected/83739fce-8870-491c-844b-9674e73b937a-kube-api-access-dczss\") pod \"kube-state-metrics-0\" (UID: \"83739fce-8870-491c-844b-9674e73b937a\") " pod="openstack/kube-state-metrics-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.318298 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.608529 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.610825 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.613068 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-web-config" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.613544 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-cluster-tls-config" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.614214 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-alertmanager-dockercfg-gxkl5" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.614324 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-tls-assets-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.614224 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"alertmanager-metric-storage-generated" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.624053 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.751933 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql6mh\" (UniqueName: \"kubernetes.io/projected/6cc07076-e637-443a-85c1-7b72beeb6cc7-kube-api-access-ql6mh\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.751997 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cc07076-e637-443a-85c1-7b72beeb6cc7-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.752048 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6cc07076-e637-443a-85c1-7b72beeb6cc7-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.752076 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6cc07076-e637-443a-85c1-7b72beeb6cc7-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.752108 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6cc07076-e637-443a-85c1-7b72beeb6cc7-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.752203 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/6cc07076-e637-443a-85c1-7b72beeb6cc7-alertmanager-metric-storage-db\") pod 
\"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.752299 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cc07076-e637-443a-85c1-7b72beeb6cc7-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.853933 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ql6mh\" (UniqueName: \"kubernetes.io/projected/6cc07076-e637-443a-85c1-7b72beeb6cc7-kube-api-access-ql6mh\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.853992 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cc07076-e637-443a-85c1-7b72beeb6cc7-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.854071 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6cc07076-e637-443a-85c1-7b72beeb6cc7-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.854098 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6cc07076-e637-443a-85c1-7b72beeb6cc7-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.854124 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6cc07076-e637-443a-85c1-7b72beeb6cc7-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.854148 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-metric-storage-db\" (UniqueName: \"kubernetes.io/empty-dir/6cc07076-e637-443a-85c1-7b72beeb6cc7-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.854178 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cc07076-e637-443a-85c1-7b72beeb6cc7-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.857949 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-metric-storage-db\" (UniqueName: 
\"kubernetes.io/empty-dir/6cc07076-e637-443a-85c1-7b72beeb6cc7-alertmanager-metric-storage-db\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.859838 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6cc07076-e637-443a-85c1-7b72beeb6cc7-web-config\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.860045 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6cc07076-e637-443a-85c1-7b72beeb6cc7-tls-assets\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.862384 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6cc07076-e637-443a-85c1-7b72beeb6cc7-config-out\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.862802 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/6cc07076-e637-443a-85c1-7b72beeb6cc7-cluster-tls-config\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.872306 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql6mh\" (UniqueName: \"kubernetes.io/projected/6cc07076-e637-443a-85c1-7b72beeb6cc7-kube-api-access-ql6mh\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.874420 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/6cc07076-e637-443a-85c1-7b72beeb6cc7-config-volume\") pod \"alertmanager-metric-storage-0\" (UID: \"6cc07076-e637-443a-85c1-7b72beeb6cc7\") " pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:12 crc kubenswrapper[4708]: I0227 17:14:12.945723 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/alertmanager-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.287891 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.289904 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.291475 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-j5w7w" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.291780 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.292408 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.294401 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.294606 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.294714 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.294815 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.294948 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.300627 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.463929 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.463995 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.464024 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.464051 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.464217 4708 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c129cc00-13ca-4502-aa1b-866133b164a9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.464291 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.464358 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c129cc00-13ca-4502-aa1b-866133b164a9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.464465 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.464520 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-config\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.464549 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jsv9\" (UniqueName: \"kubernetes.io/projected/c129cc00-13ca-4502-aa1b-866133b164a9-kube-api-access-8jsv9\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.565787 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-config\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.565899 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jsv9\" (UniqueName: \"kubernetes.io/projected/c129cc00-13ca-4502-aa1b-866133b164a9-kube-api-access-8jsv9\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.566257 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: 
\"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.566389 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.566437 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.566464 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.566512 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c129cc00-13ca-4502-aa1b-866133b164a9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.566535 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.566582 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c129cc00-13ca-4502-aa1b-866133b164a9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.566625 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.567383 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.571591 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" 
(UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.571633 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.571726 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.571774 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c129cc00-13ca-4502-aa1b-866133b164a9-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.572136 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-config\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.582509 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c129cc00-13ca-4502-aa1b-866133b164a9-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.583295 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.584260 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.584298 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6a3d85c1c1fcaae45da21a4ce37501d7d698227fff3b451bbf342800bd1947c3/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.584920 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jsv9\" (UniqueName: \"kubernetes.io/projected/c129cc00-13ca-4502-aa1b-866133b164a9-kube-api-access-8jsv9\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.622268 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\") pod \"prometheus-metric-storage-0\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:13 crc kubenswrapper[4708]: I0227 17:14:13.635077 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.356200 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6zlsq"] Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.359531 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.371660 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6zlsq"] Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.372552 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-rtc5l" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.372963 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.373760 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.409704 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-k2qzb"] Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.411475 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.417568 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-k2qzb"] Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.541765 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-scripts\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.542053 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2410b28c-0b9c-4da0-826a-bcbbab63a292-ovn-controller-tls-certs\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.542100 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-var-run\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.542120 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-etc-ovs\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.542153 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2410b28c-0b9c-4da0-826a-bcbbab63a292-var-log-ovn\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.542173 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2410b28c-0b9c-4da0-826a-bcbbab63a292-combined-ca-bundle\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.542186 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2410b28c-0b9c-4da0-826a-bcbbab63a292-var-run-ovn\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.542200 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwm2r\" (UniqueName: \"kubernetes.io/projected/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-kube-api-access-bwm2r\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.542226 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/2410b28c-0b9c-4da0-826a-bcbbab63a292-scripts\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.542251 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-var-lib\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.542269 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gpwf\" (UniqueName: \"kubernetes.io/projected/2410b28c-0b9c-4da0-826a-bcbbab63a292-kube-api-access-6gpwf\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.542298 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-var-log\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.542316 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2410b28c-0b9c-4da0-826a-bcbbab63a292-var-run\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.643481 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-scripts\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.643545 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2410b28c-0b9c-4da0-826a-bcbbab63a292-ovn-controller-tls-certs\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.643572 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-var-run\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.643590 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-etc-ovs\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.643624 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2410b28c-0b9c-4da0-826a-bcbbab63a292-var-log-ovn\") pod \"ovn-controller-6zlsq\" (UID: 
\"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.643645 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2410b28c-0b9c-4da0-826a-bcbbab63a292-combined-ca-bundle\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.643661 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2410b28c-0b9c-4da0-826a-bcbbab63a292-var-run-ovn\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.643680 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwm2r\" (UniqueName: \"kubernetes.io/projected/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-kube-api-access-bwm2r\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.643705 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2410b28c-0b9c-4da0-826a-bcbbab63a292-scripts\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.643732 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-var-lib\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.643748 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gpwf\" (UniqueName: \"kubernetes.io/projected/2410b28c-0b9c-4da0-826a-bcbbab63a292-kube-api-access-6gpwf\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.643773 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-var-log\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.643789 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2410b28c-0b9c-4da0-826a-bcbbab63a292-var-run\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.644146 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/2410b28c-0b9c-4da0-826a-bcbbab63a292-var-run\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.644167 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"var-run\" (UniqueName: \"kubernetes.io/host-path/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-var-run\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.644630 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-etc-ovs\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.644768 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/2410b28c-0b9c-4da0-826a-bcbbab63a292-var-log-ovn\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.645191 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/2410b28c-0b9c-4da0-826a-bcbbab63a292-var-run-ovn\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.645463 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-var-lib\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.645511 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-scripts\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.645551 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-var-log\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.646454 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/2410b28c-0b9c-4da0-826a-bcbbab63a292-scripts\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.651464 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/2410b28c-0b9c-4da0-826a-bcbbab63a292-ovn-controller-tls-certs\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.660530 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2410b28c-0b9c-4da0-826a-bcbbab63a292-combined-ca-bundle\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.666360 4708 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwm2r\" (UniqueName: \"kubernetes.io/projected/cdfec2dc-369d-405a-a7c4-95c4b5a08d8a-kube-api-access-bwm2r\") pod \"ovn-controller-ovs-k2qzb\" (UID: \"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a\") " pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.669097 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gpwf\" (UniqueName: \"kubernetes.io/projected/2410b28c-0b9c-4da0-826a-bcbbab63a292-kube-api-access-6gpwf\") pod \"ovn-controller-6zlsq\" (UID: \"2410b28c-0b9c-4da0-826a-bcbbab63a292\") " pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.688667 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6zlsq" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.723625 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.896228 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.897392 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.899472 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.899960 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.900151 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.900556 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.911930 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Feb 27 17:14:16 crc kubenswrapper[4708]: I0227 17:14:16.912277 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-5wsh7" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.051691 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2314c35-5338-4db2-a705-53cbc737f9a1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.051771 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e2314c35-5338-4db2-a705-53cbc737f9a1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.051921 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btq8p\" (UniqueName: \"kubernetes.io/projected/e2314c35-5338-4db2-a705-53cbc737f9a1-kube-api-access-btq8p\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " 
pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.051969 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2314c35-5338-4db2-a705-53cbc737f9a1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.052006 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2314c35-5338-4db2-a705-53cbc737f9a1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.052035 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2314c35-5338-4db2-a705-53cbc737f9a1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.052083 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2314c35-5338-4db2-a705-53cbc737f9a1-config\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.052111 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-e692e770-b12d-4b70-956d-02e1e3905ca5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e692e770-b12d-4b70-956d-02e1e3905ca5\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.153632 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btq8p\" (UniqueName: \"kubernetes.io/projected/e2314c35-5338-4db2-a705-53cbc737f9a1-kube-api-access-btq8p\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.153739 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2314c35-5338-4db2-a705-53cbc737f9a1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.153785 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2314c35-5338-4db2-a705-53cbc737f9a1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.153833 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2314c35-5338-4db2-a705-53cbc737f9a1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.153907 4708 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2314c35-5338-4db2-a705-53cbc737f9a1-config\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.153953 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-e692e770-b12d-4b70-956d-02e1e3905ca5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e692e770-b12d-4b70-956d-02e1e3905ca5\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.153985 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2314c35-5338-4db2-a705-53cbc737f9a1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.154051 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e2314c35-5338-4db2-a705-53cbc737f9a1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.154897 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/e2314c35-5338-4db2-a705-53cbc737f9a1-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.155148 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/e2314c35-5338-4db2-a705-53cbc737f9a1-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.155920 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2314c35-5338-4db2-a705-53cbc737f9a1-config\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.158814 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2314c35-5338-4db2-a705-53cbc737f9a1-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.160673 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e2314c35-5338-4db2-a705-53cbc737f9a1-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.174534 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.174571 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-e692e770-b12d-4b70-956d-02e1e3905ca5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e692e770-b12d-4b70-956d-02e1e3905ca5\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/c2b0c0899463b02ec4168f2db970ce4a3d564557bb152500f05a8933bb823e47/globalmount\"" pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.184552 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/e2314c35-5338-4db2-a705-53cbc737f9a1-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.186799 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btq8p\" (UniqueName: \"kubernetes.io/projected/e2314c35-5338-4db2-a705-53cbc737f9a1-kube-api-access-btq8p\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.227813 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-e692e770-b12d-4b70-956d-02e1e3905ca5\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-e692e770-b12d-4b70-956d-02e1e3905ca5\") pod \"ovsdbserver-nb-0\" (UID: \"e2314c35-5338-4db2-a705-53cbc737f9a1\") " pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:17 crc kubenswrapper[4708]: I0227 17:14:17.523614 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Feb 27 17:14:18 crc kubenswrapper[4708]: W0227 17:14:18.854680 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0c436943_14ee_474c_a393_c067fd0dec97.slice/crio-6d605d7c2211039c0003e26a24a0cf1643bc247e4f22e33f15d8fc62b8de3d23 WatchSource:0}: Error finding container 6d605d7c2211039c0003e26a24a0cf1643bc247e4f22e33f15d8fc62b8de3d23: Status 404 returned error can't find the container with id 6d605d7c2211039c0003e26a24a0cf1643bc247e4f22e33f15d8fc62b8de3d23 Feb 27 17:14:18 crc kubenswrapper[4708]: I0227 17:14:18.870115 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:14:19 crc kubenswrapper[4708]: I0227 17:14:19.824583 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0c436943-14ee-474c-a393-c067fd0dec97","Type":"ContainerStarted","Data":"6d605d7c2211039c0003e26a24a0cf1643bc247e4f22e33f15d8fc62b8de3d23"} Feb 27 17:14:19 crc kubenswrapper[4708]: I0227 17:14:19.826221 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4c3332de-a21c-4552-a037-c5665b4c0927","Type":"ContainerStarted","Data":"4d965e81d36f3e4a34bb6a72e431befe4a0da722d79f76892e4a756e28b37c2d"} Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.260049 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.261756 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.271791 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-9zkk7" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.271903 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.272115 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.272299 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.279877 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.412048 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.412360 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.412447 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.412463 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-config\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.412487 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.412517 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.412547 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-220252b1-9b0a-492f-ad8a-322507778d98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-220252b1-9b0a-492f-ad8a-322507778d98\") pod 
\"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.412578 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csgpm\" (UniqueName: \"kubernetes.io/projected/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-kube-api-access-csgpm\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.513723 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.514638 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.514688 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-config\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.514711 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.514743 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.514793 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-220252b1-9b0a-492f-ad8a-322507778d98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-220252b1-9b0a-492f-ad8a-322507778d98\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.514812 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-csgpm\" (UniqueName: \"kubernetes.io/projected/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-kube-api-access-csgpm\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.514900 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.515112 4708 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.516352 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.516971 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-config\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.519992 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.520079 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-220252b1-9b0a-492f-ad8a-322507778d98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-220252b1-9b0a-492f-ad8a-322507778d98\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b62ab03f2486e0853d46bdb7bef45a3d09c441a90011086fccaf98700251158e/globalmount\"" pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.530207 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.530375 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.533583 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-csgpm\" (UniqueName: \"kubernetes.io/projected/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-kube-api-access-csgpm\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.537413 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/9f2235b7-8f1a-4510-8ca8-ed784bf1aec1-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.569385 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-220252b1-9b0a-492f-ad8a-322507778d98\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-220252b1-9b0a-492f-ad8a-322507778d98\") pod \"ovsdbserver-sb-0\" (UID: 
\"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1\") " pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:20 crc kubenswrapper[4708]: I0227 17:14:20.588718 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.705020 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8"] Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.706978 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.711039 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-dockercfg-jpx44" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.711123 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca-bundle" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.711443 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-http" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.711587 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.711622 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-config" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.722350 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8"] Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.836735 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: \"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.836776 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: \"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.836869 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: \"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.836887 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8phsx\" (UniqueName: \"kubernetes.io/projected/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-kube-api-access-8phsx\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: 
\"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.836928 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: \"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.869700 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk"] Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.870788 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.875660 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.876173 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.876381 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.889785 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk"] Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.942130 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: \"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.942206 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8phsx\" (UniqueName: \"kubernetes.io/projected/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-kube-api-access-8phsx\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: \"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.942265 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: \"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.942348 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: \"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.942378 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: \"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.943485 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-config\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: \"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.943742 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: \"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.961067 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26"] Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.962097 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.965894 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.966042 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.966255 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-http\" (UniqueName: \"kubernetes.io/secret/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-cloudkitty-lokistack-distributor-http\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: \"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.978465 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8phsx\" (UniqueName: \"kubernetes.io/projected/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-kube-api-access-8phsx\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: \"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:21 crc kubenswrapper[4708]: I0227 17:14:21.981585 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-distributor-grpc\" (UniqueName: \"kubernetes.io/secret/e9768cf3-76f8-46d6-bfc4-8536e88e92a3-cloudkitty-lokistack-distributor-grpc\") pod \"cloudkitty-lokistack-distributor-585d9bcbc-xjzz8\" (UID: \"e9768cf3-76f8-46d6-bfc4-8536e88e92a3\") " pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.018955 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26"] Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 
17:14:22.034315 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.048912 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b7415cb-a36a-4035-bcfa-1454faaa3e95-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.048955 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4a0e43-6399-4a19-97a2-6ecfa156222c-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.048998 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/0b7415cb-a36a-4035-bcfa-1454faaa3e95-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.049103 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v444g\" (UniqueName: \"kubernetes.io/projected/0d4a0e43-6399-4a19-97a2-6ecfa156222c-kube-api-access-v444g\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.049236 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/0b7415cb-a36a-4035-bcfa-1454faaa3e95-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.049253 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4a0e43-6399-4a19-97a2-6ecfa156222c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.049285 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt7ks\" (UniqueName: \"kubernetes.io/projected/0b7415cb-a36a-4035-bcfa-1454faaa3e95-kube-api-access-pt7ks\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.049315 4708 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/0d4a0e43-6399-4a19-97a2-6ecfa156222c-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.049398 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/0d4a0e43-6399-4a19-97a2-6ecfa156222c-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.049423 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/0b7415cb-a36a-4035-bcfa-1454faaa3e95-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.049504 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b7415cb-a36a-4035-bcfa-1454faaa3e95-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.150594 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/0b7415cb-a36a-4035-bcfa-1454faaa3e95-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.150647 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v444g\" (UniqueName: \"kubernetes.io/projected/0d4a0e43-6399-4a19-97a2-6ecfa156222c-kube-api-access-v444g\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.150692 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/0b7415cb-a36a-4035-bcfa-1454faaa3e95-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.150710 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4a0e43-6399-4a19-97a2-6ecfa156222c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") 
" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.150728 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt7ks\" (UniqueName: \"kubernetes.io/projected/0b7415cb-a36a-4035-bcfa-1454faaa3e95-kube-api-access-pt7ks\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.150749 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/0d4a0e43-6399-4a19-97a2-6ecfa156222c-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.150786 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/0d4a0e43-6399-4a19-97a2-6ecfa156222c-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.150804 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/0b7415cb-a36a-4035-bcfa-1454faaa3e95-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.150856 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b7415cb-a36a-4035-bcfa-1454faaa3e95-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.150905 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b7415cb-a36a-4035-bcfa-1454faaa3e95-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.150924 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4a0e43-6399-4a19-97a2-6ecfa156222c-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.151833 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d4a0e43-6399-4a19-97a2-6ecfa156222c-config\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") " 
pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.152675 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0d4a0e43-6399-4a19-97a2-6ecfa156222c-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.154108 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b7415cb-a36a-4035-bcfa-1454faaa3e95-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.154979 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0b7415cb-a36a-4035-bcfa-1454faaa3e95-config\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.157296 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-loki-s3" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.157745 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-http" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.158027 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-grpc" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.158072 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-querier-grpc" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.174154 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-query-frontend-http" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.182337 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/0b7415cb-a36a-4035-bcfa-1454faaa3e95-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.183026 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-grpc\" (UniqueName: \"kubernetes.io/secret/0b7415cb-a36a-4035-bcfa-1454faaa3e95-cloudkitty-lokistack-querier-grpc\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.206911 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-querier-http\" (UniqueName: \"kubernetes.io/secret/0b7415cb-a36a-4035-bcfa-1454faaa3e95-cloudkitty-lokistack-querier-http\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " 
pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.223664 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-http\" (UniqueName: \"kubernetes.io/secret/0d4a0e43-6399-4a19-97a2-6ecfa156222c-cloudkitty-lokistack-query-frontend-http\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.225404 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d"] Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.226508 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-query-frontend-grpc\" (UniqueName: \"kubernetes.io/secret/0d4a0e43-6399-4a19-97a2-6ecfa156222c-cloudkitty-lokistack-query-frontend-grpc\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.228132 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.230494 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v444g\" (UniqueName: \"kubernetes.io/projected/0d4a0e43-6399-4a19-97a2-6ecfa156222c-kube-api-access-v444g\") pod \"cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26\" (UID: \"0d4a0e43-6399-4a19-97a2-6ecfa156222c\") " pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.239993 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-http" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.240588 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.241804 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway-ca-bundle" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.241883 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-gateway" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.241914 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"cloudkitty-lokistack-ca" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.241979 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-client-http" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.242011 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-gateway-dockercfg-wk84s" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.256549 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt7ks\" (UniqueName: \"kubernetes.io/projected/0b7415cb-a36a-4035-bcfa-1454faaa3e95-kube-api-access-pt7ks\") pod \"cloudkitty-lokistack-querier-58c84b5844-wb4dk\" (UID: \"0b7415cb-a36a-4035-bcfa-1454faaa3e95\") " pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc 
kubenswrapper[4708]: I0227 17:14:22.305560 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d"] Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.342323 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.352932 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw"] Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.354340 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.363831 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbvb9\" (UniqueName: \"kubernetes.io/projected/191b9cdf-6626-4c04-bc5e-c8585af9940d-kube-api-access-vbvb9\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.363938 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.363965 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/191b9cdf-6626-4c04-bc5e-c8585af9940d-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.364035 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.364061 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/191b9cdf-6626-4c04-bc5e-c8585af9940d-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.364081 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.364121 4708 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.364155 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.364196 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/191b9cdf-6626-4c04-bc5e-c8585af9940d-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.376390 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw"] Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.469836 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.469950 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.469989 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/191b9cdf-6626-4c04-bc5e-c8585af9940d-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470013 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470048 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1f8805bc-c67e-435a-8734-6a8e4f845e9f-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: 
\"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470083 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470115 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470154 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470176 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470211 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/191b9cdf-6626-4c04-bc5e-c8585af9940d-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470235 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1f8805bc-c67e-435a-8734-6a8e4f845e9f-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470274 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vbvb9\" (UniqueName: \"kubernetes.io/projected/191b9cdf-6626-4c04-bc5e-c8585af9940d-kube-api-access-vbvb9\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470295 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470323 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p7m4\" (UniqueName: \"kubernetes.io/projected/1f8805bc-c67e-435a-8734-6a8e4f845e9f-kube-api-access-8p7m4\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470366 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470388 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1f8805bc-c67e-435a-8734-6a8e4f845e9f-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470419 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/191b9cdf-6626-4c04-bc5e-c8585af9940d-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.470455 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.471547 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.473324 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.474213 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.474958 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.475804 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/191b9cdf-6626-4c04-bc5e-c8585af9940d-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.476143 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/191b9cdf-6626-4c04-bc5e-c8585af9940d-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.477199 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/191b9cdf-6626-4c04-bc5e-c8585af9940d-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.477660 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/191b9cdf-6626-4c04-bc5e-c8585af9940d-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.492270 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.493792 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbvb9\" (UniqueName: \"kubernetes.io/projected/191b9cdf-6626-4c04-bc5e-c8585af9940d-kube-api-access-vbvb9\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-fn48d\" (UID: \"191b9cdf-6626-4c04-bc5e-c8585af9940d\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.571930 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1f8805bc-c67e-435a-8734-6a8e4f845e9f-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.571999 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.572022 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.572046 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1f8805bc-c67e-435a-8734-6a8e4f845e9f-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.572074 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.572186 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p7m4\" (UniqueName: \"kubernetes.io/projected/1f8805bc-c67e-435a-8734-6a8e4f845e9f-kube-api-access-8p7m4\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.572218 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1f8805bc-c67e-435a-8734-6a8e4f845e9f-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " 
pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.572256 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.572289 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.573162 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rbac\" (UniqueName: \"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-rbac\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.574154 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-cloudkitty-lokistack-gateway-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.576146 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lokistack-gateway\" (UniqueName: \"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-lokistack-gateway\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.576580 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tenants\" (UniqueName: \"kubernetes.io/secret/1f8805bc-c67e-435a-8734-6a8e4f845e9f-tenants\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.579717 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-cloudkitty-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.581480 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-gateway-client-http\" (UniqueName: \"kubernetes.io/secret/1f8805bc-c67e-435a-8734-6a8e4f845e9f-cloudkitty-lokistack-gateway-client-http\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.588272 4708 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1f8805bc-c67e-435a-8734-6a8e4f845e9f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.593385 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-secret\" (UniqueName: \"kubernetes.io/secret/1f8805bc-c67e-435a-8734-6a8e4f845e9f-tls-secret\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.598088 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p7m4\" (UniqueName: \"kubernetes.io/projected/1f8805bc-c67e-435a-8734-6a8e4f845e9f-kube-api-access-8p7m4\") pod \"cloudkitty-lokistack-gateway-7f8685b49f-hbxzw\" (UID: \"1f8805bc-c67e-435a-8734-6a8e4f845e9f\") " pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.618090 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.672669 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.863856 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.865079 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.867196 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-grpc" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.869161 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-ingester-http" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.873558 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"] Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.940633 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.942282 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.944065 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-http" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.944366 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.954108 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"] Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.979770 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/238aef54-b0dd-495b-a5f8-66cc43b12088-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.980085 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/238aef54-b0dd-495b-a5f8-66cc43b12088-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.980186 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/238aef54-b0dd-495b-a5f8-66cc43b12088-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.980284 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238aef54-b0dd-495b-a5f8-66cc43b12088-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.980361 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgm97\" (UniqueName: \"kubernetes.io/projected/238aef54-b0dd-495b-a5f8-66cc43b12088-kube-api-access-kgm97\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.980462 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.980554 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " 
pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:22 crc kubenswrapper[4708]: I0227 17:14:22.980647 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/238aef54-b0dd-495b-a5f8-66cc43b12088-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.010791 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.013429 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.016876 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-http" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.017371 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-index-gateway-grpc" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.018664 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"] Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.082263 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: \"kubernetes.io/secret/238aef54-b0dd-495b-a5f8-66cc43b12088-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.082339 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/238aef54-b0dd-495b-a5f8-66cc43b12088-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.082383 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98fnc\" (UniqueName: \"kubernetes.io/projected/cb80bc89-9a5d-4ade-89d7-99d39732a907-kube-api-access-98fnc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.082435 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb80bc89-9a5d-4ade-89d7-99d39732a907-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.082577 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.082635 4708 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/238aef54-b0dd-495b-a5f8-66cc43b12088-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.082712 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cb80bc89-9a5d-4ade-89d7-99d39732a907-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.082742 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238aef54-b0dd-495b-a5f8-66cc43b12088-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.082793 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgm97\" (UniqueName: \"kubernetes.io/projected/238aef54-b0dd-495b-a5f8-66cc43b12088-kube-api-access-kgm97\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.083018 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb80bc89-9a5d-4ade-89d7-99d39732a907-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.083062 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cb80bc89-9a5d-4ade-89d7-99d39732a907-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.083118 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.083205 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.083253 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/cb80bc89-9a5d-4ade-89d7-99d39732a907-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: 
\"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.083332 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/238aef54-b0dd-495b-a5f8-66cc43b12088-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.083644 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.084134 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/238aef54-b0dd-495b-a5f8-66cc43b12088-config\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.086042 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/238aef54-b0dd-495b-a5f8-66cc43b12088-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.086122 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.095596 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-http\" (UniqueName: \"kubernetes.io/secret/238aef54-b0dd-495b-a5f8-66cc43b12088-cloudkitty-lokistack-ingester-http\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.095683 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/238aef54-b0dd-495b-a5f8-66cc43b12088-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.099190 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgm97\" (UniqueName: \"kubernetes.io/projected/238aef54-b0dd-495b-a5f8-66cc43b12088-kube-api-access-kgm97\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.101191 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ingester-grpc\" (UniqueName: 
\"kubernetes.io/secret/238aef54-b0dd-495b-a5f8-66cc43b12088-cloudkitty-lokistack-ingester-grpc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.109095 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.109579 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"cloudkitty-lokistack-ingester-0\" (UID: \"238aef54-b0dd-495b-a5f8-66cc43b12088\") " pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.185598 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98fnc\" (UniqueName: \"kubernetes.io/projected/cb80bc89-9a5d-4ade-89d7-99d39732a907-kube-api-access-98fnc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.185653 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb80bc89-9a5d-4ade-89d7-99d39732a907-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.185704 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.185745 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/c56ea2d3-2905-47bd-b819-41705a3b858f-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.185770 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c56ea2d3-2905-47bd-b819-41705a3b858f-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.185790 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cb80bc89-9a5d-4ade-89d7-99d39732a907-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:14:23 crc kubenswrapper[4708]: 
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.185885 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb80bc89-9a5d-4ade-89d7-99d39732a907-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.185967 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.185992 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cb80bc89-9a5d-4ade-89d7-99d39732a907-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.186190 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c56ea2d3-2905-47bd-b819-41705a3b858f-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.186382 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/cb80bc89-9a5d-4ade-89d7-99d39732a907-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.186439 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c56ea2d3-2905-47bd-b819-41705a3b858f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.186529 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c56ea2d3-2905-47bd-b819-41705a3b858f-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.186645 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.186678 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ft9c\" (UniqueName: \"kubernetes.io/projected/c56ea2d3-2905-47bd-b819-41705a3b858f-kube-api-access-9ft9c\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.187229 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb80bc89-9a5d-4ade-89d7-99d39732a907-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.188976 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-ingester-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.194324 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb80bc89-9a5d-4ade-89d7-99d39732a907-config\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.196764 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-http\" (UniqueName: \"kubernetes.io/secret/cb80bc89-9a5d-4ade-89d7-99d39732a907-cloudkitty-lokistack-compactor-http\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.198539 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-compactor-grpc\" (UniqueName: \"kubernetes.io/secret/cb80bc89-9a5d-4ade-89d7-99d39732a907-cloudkitty-lokistack-compactor-grpc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.199612 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/cb80bc89-9a5d-4ade-89d7-99d39732a907-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.203901 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98fnc\" (UniqueName: \"kubernetes.io/projected/cb80bc89-9a5d-4ade-89d7-99d39732a907-kube-api-access-98fnc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0"
Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.205305 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"cloudkitty-lokistack-compactor-0\" (UID: \"cb80bc89-9a5d-4ade-89d7-99d39732a907\") " pod="openstack/cloudkitty-lokistack-compactor-0"
\"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.287909 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c56ea2d3-2905-47bd-b819-41705a3b858f-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.287975 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c56ea2d3-2905-47bd-b819-41705a3b858f-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.288016 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c56ea2d3-2905-47bd-b819-41705a3b858f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.288037 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c56ea2d3-2905-47bd-b819-41705a3b858f-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.288077 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.288096 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ft9c\" (UniqueName: \"kubernetes.io/projected/c56ea2d3-2905-47bd-b819-41705a3b858f-kube-api-access-9ft9c\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.288484 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.289013 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c56ea2d3-2905-47bd-b819-41705a3b858f-config\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.289350 4708 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"cloudkitty-lokistack-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c56ea2d3-2905-47bd-b819-41705a3b858f-cloudkitty-lokistack-ca-bundle\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.293319 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-http\" (UniqueName: \"kubernetes.io/secret/c56ea2d3-2905-47bd-b819-41705a3b858f-cloudkitty-lokistack-index-gateway-http\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.294037 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-loki-s3\" (UniqueName: \"kubernetes.io/secret/c56ea2d3-2905-47bd-b819-41705a3b858f-cloudkitty-loki-s3\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.296088 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloudkitty-lokistack-index-gateway-grpc\" (UniqueName: \"kubernetes.io/secret/c56ea2d3-2905-47bd-b819-41705a3b858f-cloudkitty-lokistack-index-gateway-grpc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.303753 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ft9c\" (UniqueName: \"kubernetes.io/projected/c56ea2d3-2905-47bd-b819-41705a3b858f-kube-api-access-9ft9c\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.308825 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.327345 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"cloudkitty-lokistack-index-gateway-0\" (UID: \"c56ea2d3-2905-47bd-b819-41705a3b858f\") " pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.332400 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:14:23 crc kubenswrapper[4708]: I0227 17:14:23.599314 4708 scope.go:117] "RemoveContainer" containerID="cc23d8027d898e28570823e7a4b6dc0a8dcf81eabc27455dc4775141efb2084c" Feb 27 17:14:37 crc kubenswrapper[4708]: I0227 17:14:37.407838 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 17:14:42 crc kubenswrapper[4708]: I0227 17:14:42.322949 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5" podUID="dde28522-3138-4c50-b3c5-1e26d61b96e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 17:14:42 crc kubenswrapper[4708]: I0227 17:14:42.995382 4708 patch_prober.go:28] interesting pod/nmstate-webhook-786f45cff4-4mk88 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.56:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Feb 27 17:14:42 crc kubenswrapper[4708]: I0227 17:14:42.995445 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-786f45cff4-4mk88" podUID="6c61d3bb-a5e6-4206-a47a-9d6fcba04da4" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.56:9443/readyz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Feb 27 17:14:43 crc kubenswrapper[4708]: I0227 17:14:43.012020 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5" podUID="dde28522-3138-4c50-b3c5-1e26d61b96e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 17:14:44 crc kubenswrapper[4708]: E0227 17:14:44.976536 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 27 17:14:44 crc kubenswrapper[4708]: E0227 17:14:44.976800 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rht9n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cell1-server-0_openstack(eb2fe191-cb57-46a6-9797-c9890640ff74): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 17:14:44 crc kubenswrapper[4708]: E0227 17:14:44.978397 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-cell1-server-0" podUID="eb2fe191-cb57-46a6-9797-c9890640ff74" Feb 27 17:14:44 crc kubenswrapper[4708]: E0227 17:14:44.992361 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified" Feb 27 17:14:44 crc kubenswrapper[4708]: E0227 17:14:44.992587 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:setup-container,Image:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,Command:[sh -c cp /tmp/erlang-cookie-secret/.erlang.cookie /var/lib/rabbitmq/.erlang.cookie && chmod 600 /var/lib/rabbitmq/.erlang.cookie ; cp /tmp/rabbitmq-plugins/enabled_plugins /operator/enabled_plugins ; echo '[default]' > /var/lib/rabbitmq/.rabbitmqadmin.conf && sed -e 's/default_user/username/' -e 's/default_pass/password/' /tmp/default_user.conf >> /var/lib/rabbitmq/.rabbitmqadmin.conf && chmod 600 /var/lib/rabbitmq/.rabbitmqadmin.conf ; sleep 30],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} BinarySI},},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:plugins-conf,ReadOnly:false,MountPath:/tmp/rabbitmq-plugins/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-erlang-cookie,ReadOnly:false,MountPath:/var/lib/rabbitmq/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:erlang-cookie-secret,ReadOnly:false,MountPath:/tmp/erlang-cookie-secret/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-plugins,ReadOnly:false,MountPath:/operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:persistence,ReadOnly:false,MountPath:/var/lib/rabbitmq/mnesia/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:rabbitmq-confd,ReadOnly:false,MountPath:/tmp/default_user.conf,SubPath:default_user.conf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4tq2z,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-server-0_openstack(32b89444-fadf-43c8-b552-e5071fc91481): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 17:14:44 crc kubenswrapper[4708]: E0227 17:14:44.994514 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/rabbitmq-server-0" podUID="32b89444-fadf-43c8-b552-e5071fc91481" Feb 27 17:14:45 crc kubenswrapper[4708]: E0227 17:14:45.080546 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-cell1-server-0" podUID="eb2fe191-cb57-46a6-9797-c9890640ff74" Feb 27 17:14:45 crc kubenswrapper[4708]: E0227 17:14:45.080892 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"setup-container\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified\\\"\"" pod="openstack/rabbitmq-server-0" podUID="32b89444-fadf-43c8-b552-e5071fc91481" Feb 27 17:14:45 crc kubenswrapper[4708]: W0227 17:14:45.088916 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83739fce_8870_491c_844b_9674e73b937a.slice/crio-7da37157e1c99d3ee20e321ae7211883f83203e17c4d2ac27961138f3b388681 WatchSource:0}: Error finding container 7da37157e1c99d3ee20e321ae7211883f83203e17c4d2ac27961138f3b388681: Status 404 returned error can't find the 
Feb 27 17:14:45 crc kubenswrapper[4708]: W0227 17:14:45.088916 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83739fce_8870_491c_844b_9674e73b937a.slice/crio-7da37157e1c99d3ee20e321ae7211883f83203e17c4d2ac27961138f3b388681 WatchSource:0}: Error finding container 7da37157e1c99d3ee20e321ae7211883f83203e17c4d2ac27961138f3b388681: Status 404 returned error can't find the container with id 7da37157e1c99d3ee20e321ae7211883f83203e17c4d2ac27961138f3b388681
Feb 27 17:14:46 crc kubenswrapper[4708]: I0227 17:14:46.092186 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"83739fce-8870-491c-844b-9674e73b937a","Type":"ContainerStarted","Data":"7da37157e1c99d3ee20e321ae7211883f83203e17c4d2ac27961138f3b388681"}
Feb 27 17:14:46 crc kubenswrapper[4708]: E0227 17:14:46.629492 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 27 17:14:46 crc kubenswrapper[4708]: E0227 17:14:46.629646 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cfc4l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-58hf7_openstack(ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 17:14:46 crc kubenswrapper[4708]: E0227 17:14:46.631503 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-58hf7" podUID="ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf"
Feb 27 17:14:46 crc kubenswrapper[4708]: E0227 17:14:46.657314 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 27 17:14:46 crc kubenswrapper[4708]: E0227 17:14:46.657482 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xz5qc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-x9qft_openstack(9192eb25-4187-4e0e-87ed-c98c9c6f7fdb): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 17:14:46 crc kubenswrapper[4708]: E0227 17:14:46.658732 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" podUID="9192eb25-4187-4e0e-87ed-c98c9c6f7fdb"
Feb 27 17:14:49 crc kubenswrapper[4708]: E0227 17:14:49.060571 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-mariadb:current-podified"
Feb 27 17:14:49 crc kubenswrapper[4708]: E0227 17:14:49.061083 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:mysql-bootstrap,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[bash /var/lib/operator-scripts/mysql_bootstrap.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:True,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mysql-db,ReadOnly:false,MountPath:/var/lib/mysql,SubPath:mysql,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-default,ReadOnly:true,MountPath:/var/lib/config-data/default,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data-generated,ReadOnly:false,MountPath:/var/lib/config-data/generated,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:operator-scripts,ReadOnly:true,MountPath:/var/lib/operator-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kolla-config,ReadOnly:true,MountPath:/var/lib/kolla/config_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5gk67,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openstack-galera-0_openstack(6f6f6892-d9d6-4f71-bc65-8e47c15bddc1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 17:14:49 crc kubenswrapper[4708]: E0227 17:14:49.062341 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/openstack-galera-0" podUID="6f6f6892-d9d6-4f71-bc65-8e47c15bddc1"
Feb 27 17:14:49 crc kubenswrapper[4708]: E0227 17:14:49.124476 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mysql-bootstrap\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-mariadb:current-podified\\\"\"" pod="openstack/openstack-galera-0" podUID="6f6f6892-d9d6-4f71-bc65-8e47c15bddc1"
Feb 27 17:14:49 crc kubenswrapper[4708]: E0227 17:14:49.131132 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 27 17:14:49 crc kubenswrapper[4708]: E0227 17:14:49.131306 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-55nfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-2dpsz_openstack(a8a6341a-9926-4695-9c8d-ffe8d2cbb52d): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 17:14:49 crc kubenswrapper[4708]: E0227 17:14:49.133022 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" podUID="a8a6341a-9926-4695-9c8d-ffe8d2cbb52d"
Feb 27 17:14:49 crc kubenswrapper[4708]: E0227 17:14:49.176799 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified"
Feb 27 17:14:49 crc kubenswrapper[4708]: E0227 17:14:49.177105 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4f59t,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-g6vsk_openstack(0e525119-22e4-4879-bec2-b7d830c00fcf): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Feb 27 17:14:49 crc kubenswrapper[4708]: E0227 17:14:49.178354 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" podUID="0e525119-22e4-4879-bec2-b7d830c00fcf"
Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.217118 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-58hf7"
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.339951 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-dns-svc\") pod \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\" (UID: \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\") " Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.340483 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xz5qc\" (UniqueName: \"kubernetes.io/projected/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-kube-api-access-xz5qc\") pod \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\" (UID: \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\") " Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.340631 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf-config\") pod \"ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf\" (UID: \"ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf\") " Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.340686 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-config\") pod \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\" (UID: \"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb\") " Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.340712 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfc4l\" (UniqueName: \"kubernetes.io/projected/ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf-kube-api-access-cfc4l\") pod \"ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf\" (UID: \"ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf\") " Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.342111 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9192eb25-4187-4e0e-87ed-c98c9c6f7fdb" (UID: "9192eb25-4187-4e0e-87ed-c98c9c6f7fdb"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.343478 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-config" (OuterVolumeSpecName: "config") pod "9192eb25-4187-4e0e-87ed-c98c9c6f7fdb" (UID: "9192eb25-4187-4e0e-87ed-c98c9c6f7fdb"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.344836 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf-config" (OuterVolumeSpecName: "config") pod "ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf" (UID: "ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.351991 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf-kube-api-access-cfc4l" (OuterVolumeSpecName: "kube-api-access-cfc4l") pod "ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf" (UID: "ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf"). InnerVolumeSpecName "kube-api-access-cfc4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.359714 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-kube-api-access-xz5qc" (OuterVolumeSpecName: "kube-api-access-xz5qc") pod "9192eb25-4187-4e0e-87ed-c98c9c6f7fdb" (UID: "9192eb25-4187-4e0e-87ed-c98c9c6f7fdb"). InnerVolumeSpecName "kube-api-access-xz5qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.443805 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.443839 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.443875 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfc4l\" (UniqueName: \"kubernetes.io/projected/ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf-kube-api-access-cfc4l\") on node \"crc\" DevicePath \"\"" Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.443886 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.443895 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xz5qc\" (UniqueName: \"kubernetes.io/projected/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb-kube-api-access-xz5qc\") on node \"crc\" DevicePath \"\"" Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.815564 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/alertmanager-metric-storage-0"] Feb 27 17:14:49 crc kubenswrapper[4708]: I0227 17:14:49.866918 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6zlsq"] Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.132420 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"6cc07076-e637-443a-85c1-7b72beeb6cc7","Type":"ContainerStarted","Data":"264abb564895c4cbb02949db8f810d02a7e51cc06dc684fcefb1dd36347d10bd"} Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.135625 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"0c436943-14ee-474c-a393-c067fd0dec97","Type":"ContainerStarted","Data":"af8109743a9513ee61281905fa2d2b679c65dc3dad4385e18691829cdced729a"} Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.135684 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.138517 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6zlsq" event={"ID":"2410b28c-0b9c-4da0-826a-bcbbab63a292","Type":"ContainerStarted","Data":"2ea705d13471fe4c0d8d083a080ca170db6951b4db9e7a04edc700f3adba2031"} Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.142566 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-58hf7" event={"ID":"ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf","Type":"ContainerDied","Data":"2f998b2f8b21cfd2223a58dbd994517a51815c59b3232e5bee790e273602dc3c"} 
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.142623 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-58hf7"
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.148561 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft"
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.152931 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-x9qft" event={"ID":"9192eb25-4187-4e0e-87ed-c98c9c6f7fdb","Type":"ContainerDied","Data":"c197358944d140444e56b578fa7ee69bb6b1d37d2c3e7699c45f10969a784889"}
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.160212 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=10.78114607 podStartE2EDuration="41.160196844s" podCreationTimestamp="2026-02-27 17:14:09 +0000 UTC" firstStartedPulling="2026-02-27 17:14:18.86966505 +0000 UTC m=+1257.385462667" lastFinishedPulling="2026-02-27 17:14:49.248715854 +0000 UTC m=+1287.764513441" observedRunningTime="2026-02-27 17:14:50.153347362 +0000 UTC m=+1288.669144949" watchObservedRunningTime="2026-02-27 17:14:50.160196844 +0000 UTC m=+1288.675994431"
Feb 27 17:14:50 crc kubenswrapper[4708]: E0227 17:14:50.165359 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" podUID="0e525119-22e4-4879-bec2-b7d830c00fcf"
Feb 27 17:14:50 crc kubenswrapper[4708]: E0227 17:14:50.165536 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" podUID="a8a6341a-9926-4695-9c8d-ffe8d2cbb52d"
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.222946 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.264580 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58hf7"]
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.278795 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-ingester-0"]
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.290633 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-58hf7"]
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.298061 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-compactor-0"]
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.320550 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d"]
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.326453 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8"]
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.334718 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw"]
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.367045 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26"]
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.376958 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"]
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.385996 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x9qft"]
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.391626 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk"]
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.396935 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-lokistack-index-gateway-0"]
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.402290 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-x9qft"]
Feb 27 17:14:50 crc kubenswrapper[4708]: I0227 17:14:50.407034 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"]
Feb 27 17:14:50 crc kubenswrapper[4708]: W0227 17:14:50.510997 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode9768cf3_76f8_46d6_bfc4_8536e88e92a3.slice/crio-14363a58b2b03b9115fad3032d3df56acbfe41014975e0685b1d9cb55f66897b WatchSource:0}: Error finding container 14363a58b2b03b9115fad3032d3df56acbfe41014975e0685b1d9cb55f66897b: Status 404 returned error can't find the container with id 14363a58b2b03b9115fad3032d3df56acbfe41014975e0685b1d9cb55f66897b
Feb 27 17:14:50 crc kubenswrapper[4708]: W0227 17:14:50.536050 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc129cc00_13ca_4502_aa1b_866133b164a9.slice/crio-f99dd294c93531ac750194c523b6d510397a9588a4dda378073bae6d42b4ecc4 WatchSource:0}: Error finding container f99dd294c93531ac750194c523b6d510397a9588a4dda378073bae6d42b4ecc4: Status 404 returned error can't find the container with id f99dd294c93531ac750194c523b6d510397a9588a4dda378073bae6d42b4ecc4
Feb 27 17:14:50 crc kubenswrapper[4708]: W0227 17:14:50.538071 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f8805bc_c67e_435a_8734_6a8e4f845e9f.slice/crio-e84876576f88757ec99bb5cc561a40bd299996ea3bc2c7cf124f9a989f873332 WatchSource:0}: Error finding container e84876576f88757ec99bb5cc561a40bd299996ea3bc2c7cf124f9a989f873332: Status 404 returned error can't find the container with id e84876576f88757ec99bb5cc561a40bd299996ea3bc2c7cf124f9a989f873332
Feb 27 17:14:50 crc kubenswrapper[4708]: W0227 17:14:50.544376 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb80bc89_9a5d_4ade_89d7_99d39732a907.slice/crio-d4bea91a4697782e49345f68b083a7d810cb3487ca5dfee9aa619eb667bb2f80 WatchSource:0}: Error finding container d4bea91a4697782e49345f68b083a7d810cb3487ca5dfee9aa619eb667bb2f80: Status 404 returned error can't find the container with id d4bea91a4697782e49345f68b083a7d810cb3487ca5dfee9aa619eb667bb2f80
Feb 27 17:14:50 crc kubenswrapper[4708]: E0227 17:14:50.555675 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ovsdbserver-nb,Image:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,Command:[/usr/bin/dumb-init],Args:[/usr/local/bin/container-scripts/setup.sh],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n656h575h67bh686h5d5hb4h5b4h85h7bh57bh7ch95h58fh565hfch4h9fh665hdfh55chfbh5c8hd4h688h589hb6h9hcch656h667h669h589q,ValueFrom:nil,},EnvVar{Name:OVN_LOGDIR,Value:/tmp,ValueFrom:nil,},EnvVar{Name:OVN_RUNDIR,Value:/tmp,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovndbcluster-nb-etc-ovn,ReadOnly:false,MountPath:/etc/ovn,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndb.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovndb.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btq8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/usr/local/bin/container-scripts/cleanup.sh],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/pidof ovsdb-server],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:20,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(e2314c35-5338-4db2-a705-53cbc737f9a1): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 27 17:14:50 crc kubenswrapper[4708]: E0227 17:14:50.557638 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openstack-network-exporter,Image:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,Command:[/app/openstack-network-exporter],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPENSTACK_NETWORK_EXPORTER_YAML,Value:/etc/config/openstack-network-exporter.yaml,ValueFrom:nil,},EnvVar{Name:CONFIG_HASH,Value:n656h575h67bh686h5d5hb4h5b4h85h7bh57bh7ch95h58fh565hfch4h9fh665hdfh55chfbh5c8hd4h688h589hb6h9hcch656h667h669h589q,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovsdb-rundir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovnmetrics.crt,SubPath:tls.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/private/ovnmetrics.key,SubPath:tls.key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-certs-tls-certs,ReadOnly:true,MountPath:/etc/pki/tls/certs/ovndbca.crt,SubPath:ca.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btq8p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovsdbserver-nb-0_openstack(e2314c35-5338-4db2-a705-53cbc737f9a1): ErrImagePull: pull QPS exceeded" logger="UnhandledError"
Feb 27 17:14:50 crc kubenswrapper[4708]: E0227 17:14:50.558761 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ErrImagePull: \"pull QPS exceeded\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ErrImagePull: \"pull QPS exceeded\"]" pod="openstack/ovsdbserver-nb-0" podUID="e2314c35-5338-4db2-a705-53cbc737f9a1"
Feb 27 17:14:51 crc kubenswrapper[4708]: I0227 17:14:51.157227 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e2314c35-5338-4db2-a705-53cbc737f9a1","Type":"ContainerStarted","Data":"aa0f5ee2a574e56d068f25d0ac8534433da1fdde48714f17148b1eefed8934fb"}
Feb 27 17:14:51 crc kubenswrapper[4708]: E0227 17:14:51.161283 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-nb-0" podUID="e2314c35-5338-4db2-a705-53cbc737f9a1"
Feb 27 17:14:51 crc kubenswrapper[4708]: I0227 17:14:51.162637 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"238aef54-b0dd-495b-a5f8-66cc43b12088","Type":"ContainerStarted","Data":"e91ef387d5068a662be6174301bf9edeb763fc9ee61fe8c8a3a63ea29e1e4749"}
Feb 27 17:14:51 crc kubenswrapper[4708]: I0227 17:14:51.164199 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" event={"ID":"1f8805bc-c67e-435a-8734-6a8e4f845e9f","Type":"ContainerStarted","Data":"e84876576f88757ec99bb5cc561a40bd299996ea3bc2c7cf124f9a989f873332"}
Feb 27 17:14:51 crc kubenswrapper[4708]: I0227 17:14:51.165229 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1","Type":"ContainerStarted","Data":"c2a7065b2eb9a6a97fb3d890f174dac9e6d2ec0a803698bb930bf2d0e236d3d2"}
Feb 27 17:14:51 crc kubenswrapper[4708]: I0227 17:14:51.167436 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" event={"ID":"191b9cdf-6626-4c04-bc5e-c8585af9940d","Type":"ContainerStarted","Data":"27cb6b0339d90ed434084afb127e8137ddaf94aff20ba011a596257cf99b65cb"}
Feb 27 17:14:51 crc kubenswrapper[4708]: I0227 17:14:51.169404 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c129cc00-13ca-4502-aa1b-866133b164a9","Type":"ContainerStarted","Data":"f99dd294c93531ac750194c523b6d510397a9588a4dda378073bae6d42b4ecc4"}
Feb 27 17:14:51 crc kubenswrapper[4708]: I0227 17:14:51.171046 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" event={"ID":"0d4a0e43-6399-4a19-97a2-6ecfa156222c","Type":"ContainerStarted","Data":"7e6fe8d75db62cc8eb8ad9be975ecdcfdbdcfd1f47420299d3b34d88cbc76b1c"}
Feb 27 17:14:51 crc kubenswrapper[4708]: I0227 17:14:51.172649 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" event={"ID":"0b7415cb-a36a-4035-bcfa-1454faaa3e95","Type":"ContainerStarted","Data":"f03848016980bbb9ab755268330ec68c9750e81a50c4e374fa5c1d6a39cf9921"}
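[editor's note] The "ErrImagePull: pull QPS exceeded" failures above are not registry errors: they come from the kubelet's own client-side image-pull throttle (the KubeletConfiguration fields registryPullQPS and registryBurst, which default to 5 and 10). When a burst of pods lands on the node at once, as here, the token bucket drains and further pulls fail immediately, then fall into back-off. A minimal Go model of the same mechanism, using golang.org/x/time/rate rather than the kubelet's internal wrapper (an assumption for illustration; the kubelet wraps an equivalent limiter):

```go
package main

import (
	"fmt"

	"golang.org/x/time/rate"
)

// Models the kubelet's image-pull throttle: 5 tokens/s with a burst of 10
// (the registryPullQPS/registryBurst defaults). In a tight loop roughly the
// first ten pulls are admitted; the rest fail the way ovsdbserver-nb-0 did.
func main() {
	limiter := rate.NewLimiter(5, 10)

	for i := 1; i <= 12; i++ {
		if limiter.Allow() {
			fmt.Printf("pull %2d: admitted\n", i)
		} else {
			fmt.Printf("pull %2d: pull QPS exceeded\n", i)
		}
	}
}
```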
Feb 27 17:14:51 crc kubenswrapper[4708]: I0227 17:14:51.180528 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" event={"ID":"e9768cf3-76f8-46d6-bfc4-8536e88e92a3","Type":"ContainerStarted","Data":"14363a58b2b03b9115fad3032d3df56acbfe41014975e0685b1d9cb55f66897b"}
Feb 27 17:14:51 crc kubenswrapper[4708]: I0227 17:14:51.183757 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"c56ea2d3-2905-47bd-b819-41705a3b858f","Type":"ContainerStarted","Data":"e98bb0bddc12ea52ff8448da75cc8bcdd92103c28150b79448ef46f57ecf1739"}
Feb 27 17:14:51 crc kubenswrapper[4708]: I0227 17:14:51.185568 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4c3332de-a21c-4552-a037-c5665b4c0927","Type":"ContainerStarted","Data":"6a89560c88542f1812c7b98bf68a69eda7237c410eaaede102408e207897975e"}
Feb 27 17:14:51 crc kubenswrapper[4708]: I0227 17:14:51.186747 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"cb80bc89-9a5d-4ade-89d7-99d39732a907","Type":"ContainerStarted","Data":"d4bea91a4697782e49345f68b083a7d810cb3487ca5dfee9aa619eb667bb2f80"}
Feb 27 17:14:51 crc kubenswrapper[4708]: I0227 17:14:51.233563 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-k2qzb"]
Feb 27 17:14:51 crc kubenswrapper[4708]: W0227 17:14:51.371755 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcdfec2dc_369d_405a_a7c4_95c4b5a08d8a.slice/crio-51828806bead2fe41629c34d4c97ca8cf9ae040859fda8e0a2efe74c8cc67708 WatchSource:0}: Error finding container 51828806bead2fe41629c34d4c97ca8cf9ae040859fda8e0a2efe74c8cc67708: Status 404 returned error can't find the container with id 51828806bead2fe41629c34d4c97ca8cf9ae040859fda8e0a2efe74c8cc67708
Feb 27 17:14:52 crc kubenswrapper[4708]: I0227 17:14:52.198955 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"83739fce-8870-491c-844b-9674e73b937a","Type":"ContainerStarted","Data":"464ab374952e9ea798847dd85f9ad750f5e3919a70afea2e2dfeee4d20ae9791"}
Feb 27 17:14:52 crc kubenswrapper[4708]: I0227 17:14:52.199053 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0"
Feb 27 17:14:52 crc kubenswrapper[4708]: I0227 17:14:52.201967 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k2qzb" event={"ID":"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a","Type":"ContainerStarted","Data":"51828806bead2fe41629c34d4c97ca8cf9ae040859fda8e0a2efe74c8cc67708"}
Feb 27 17:14:52 crc kubenswrapper[4708]: E0227 17:14:52.205169 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ovsdbserver-nb\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified\\\"\", failed to \"StartContainer\" for \"openstack-network-exporter\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-nb-0" podUID="e2314c35-5338-4db2-a705-53cbc737f9a1"
\\\"quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified\\\"\"]" pod="openstack/ovsdbserver-nb-0" podUID="e2314c35-5338-4db2-a705-53cbc737f9a1" Feb 27 17:14:52 crc kubenswrapper[4708]: I0227 17:14:52.230319 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=34.669022762 podStartE2EDuration="41.230301368s" podCreationTimestamp="2026-02-27 17:14:11 +0000 UTC" firstStartedPulling="2026-02-27 17:14:45.094297638 +0000 UTC m=+1283.610095235" lastFinishedPulling="2026-02-27 17:14:51.655576204 +0000 UTC m=+1290.171373841" observedRunningTime="2026-02-27 17:14:52.226733858 +0000 UTC m=+1290.742531455" watchObservedRunningTime="2026-02-27 17:14:52.230301368 +0000 UTC m=+1290.746098975" Feb 27 17:14:52 crc kubenswrapper[4708]: I0227 17:14:52.245849 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9192eb25-4187-4e0e-87ed-c98c9c6f7fdb" path="/var/lib/kubelet/pods/9192eb25-4187-4e0e-87ed-c98c9c6f7fdb/volumes" Feb 27 17:14:52 crc kubenswrapper[4708]: I0227 17:14:52.246360 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf" path="/var/lib/kubelet/pods/ad262df7-116e-4dd5-9bc4-1e1bf9ee66bf/volumes" Feb 27 17:14:55 crc kubenswrapper[4708]: I0227 17:14:55.269211 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Feb 27 17:14:56 crc kubenswrapper[4708]: I0227 17:14:56.233401 4708 generic.go:334] "Generic (PLEG): container finished" podID="4c3332de-a21c-4552-a037-c5665b4c0927" containerID="6a89560c88542f1812c7b98bf68a69eda7237c410eaaede102408e207897975e" exitCode=0 Feb 27 17:14:56 crc kubenswrapper[4708]: I0227 17:14:56.238050 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4c3332de-a21c-4552-a037-c5665b4c0927","Type":"ContainerDied","Data":"6a89560c88542f1812c7b98bf68a69eda7237c410eaaede102408e207897975e"} Feb 27 17:14:57 crc kubenswrapper[4708]: I0227 17:14:57.246633 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c129cc00-13ca-4502-aa1b-866133b164a9","Type":"ContainerStarted","Data":"2e7a064c26d17a9d34a9bfa4a83396738620cb57acbf204cdfae3a3489c41b06"} Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.146483 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt"] Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.148214 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.152664 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.152933 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.179996 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt"] Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.268953 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-config-volume\") pod \"collect-profiles-29536875-lrdlt\" (UID: \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.269632 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7s9h\" (UniqueName: \"kubernetes.io/projected/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-kube-api-access-r7s9h\") pod \"collect-profiles-29536875-lrdlt\" (UID: \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.269677 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-secret-volume\") pod \"collect-profiles-29536875-lrdlt\" (UID: \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.371613 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7s9h\" (UniqueName: \"kubernetes.io/projected/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-kube-api-access-r7s9h\") pod \"collect-profiles-29536875-lrdlt\" (UID: \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.371696 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-secret-volume\") pod \"collect-profiles-29536875-lrdlt\" (UID: \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.371817 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-config-volume\") pod \"collect-profiles-29536875-lrdlt\" (UID: \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.372893 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-config-volume\") pod 
\"collect-profiles-29536875-lrdlt\" (UID: \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.377457 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-secret-volume\") pod \"collect-profiles-29536875-lrdlt\" (UID: \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.399234 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7s9h\" (UniqueName: \"kubernetes.io/projected/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-kube-api-access-r7s9h\") pod \"collect-profiles-29536875-lrdlt\" (UID: \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" Feb 27 17:15:00 crc kubenswrapper[4708]: I0227 17:15:00.589110 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" Feb 27 17:15:01 crc kubenswrapper[4708]: I0227 17:15:01.238472 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt"] Feb 27 17:15:01 crc kubenswrapper[4708]: W0227 17:15:01.257975 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae8ae876_ebb6_4de9_bacc_1efece3d20a0.slice/crio-a5a84c6573f0dd5d26d4db28e27643e4e275fd89965f2e68eed845f4c1a35d21 WatchSource:0}: Error finding container a5a84c6573f0dd5d26d4db28e27643e4e275fd89965f2e68eed845f4c1a35d21: Status 404 returned error can't find the container with id a5a84c6573f0dd5d26d4db28e27643e4e275fd89965f2e68eed845f4c1a35d21 Feb 27 17:15:01 crc kubenswrapper[4708]: I0227 17:15:01.300507 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" event={"ID":"ae8ae876-ebb6-4de9-bacc-1efece3d20a0","Type":"ContainerStarted","Data":"a5a84c6573f0dd5d26d4db28e27643e4e275fd89965f2e68eed845f4c1a35d21"} Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.312010 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1","Type":"ContainerStarted","Data":"0a0bcdc702f0bcee556f92c96e5c69252a8a709c06734a62e64ad595ff0ba064"} Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.313485 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" event={"ID":"0d4a0e43-6399-4a19-97a2-6ecfa156222c","Type":"ContainerStarted","Data":"bdbf353e5208d18ad0af677a7d1fb1f108da90d6867d65ec8b78ecff7b554c79"} Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.313993 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.316045 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" event={"ID":"191b9cdf-6626-4c04-bc5e-c8585af9940d","Type":"ContainerStarted","Data":"de635877a44be0dd6ff629bd797a012468ac1e85591161c560abf3fc17fef804"} Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.316094 4708 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.318159 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k2qzb" event={"ID":"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a","Type":"ContainerStarted","Data":"1a529ad4063e884e4804b1b948c013f2b72497f63177270bb6336171b52db845"} Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.318396 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" podUID="191b9cdf-6626-4c04-bc5e-c8585af9940d" containerName="gateway" probeResult="failure" output="Get \"https://10.217.0.126:8081/ready\": dial tcp 10.217.0.126:8081: connect: connection refused" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.320928 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-compactor-0" event={"ID":"cb80bc89-9a5d-4ade-89d7-99d39732a907","Type":"ContainerStarted","Data":"0824c4e9b74ed4f671a3d622446e1ad8b577f67f3438485960895e949804dc45"} Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.321200 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.325649 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" event={"ID":"1f8805bc-c67e-435a-8734-6a8e4f845e9f","Type":"ContainerStarted","Data":"60b16eddfc5b734856de7704f43409004c5b63d8c47925a925d475702b416961"} Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.326148 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.328627 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"4c3332de-a21c-4552-a037-c5665b4c0927","Type":"ContainerStarted","Data":"8c5c4853ded3bcc34ed90c26729eee88eb70a2294c3405b1f344db35d0edf60e"} Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.330367 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" event={"ID":"0b7415cb-a36a-4035-bcfa-1454faaa3e95","Type":"ContainerStarted","Data":"2ae8d228f23704613ccb7400ca38cc3ed6b1f794706ff2e2b2d5b6d555f26a1c"} Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.331002 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.331734 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.334603 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" event={"ID":"e9768cf3-76f8-46d6-bfc4-8536e88e92a3","Type":"ContainerStarted","Data":"bf15f2f32d215df18a7e6881a8787dad19b98eda85cff19cf325d0e586ceb1c4"} Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.334932 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.351803 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" podStartSLOduration=31.636499332 podStartE2EDuration="41.351787665s" podCreationTimestamp="2026-02-27 17:14:21 +0000 UTC" firstStartedPulling="2026-02-27 17:14:50.520175593 +0000 UTC m=+1289.035973180" lastFinishedPulling="2026-02-27 17:15:00.235463916 +0000 UTC m=+1298.751261513" observedRunningTime="2026-02-27 17:15:02.344724915 +0000 UTC m=+1300.860522502" watchObservedRunningTime="2026-02-27 17:15:02.351787665 +0000 UTC m=+1300.867585252" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.376881 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" podStartSLOduration=31.187963298 podStartE2EDuration="41.376865357s" podCreationTimestamp="2026-02-27 17:14:21 +0000 UTC" firstStartedPulling="2026-02-27 17:14:50.519567486 +0000 UTC m=+1289.035365073" lastFinishedPulling="2026-02-27 17:15:00.708469555 +0000 UTC m=+1299.224267132" observedRunningTime="2026-02-27 17:15:02.374016676 +0000 UTC m=+1300.889814253" watchObservedRunningTime="2026-02-27 17:15:02.376865357 +0000 UTC m=+1300.892662944" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.379076 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.424659 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2dpsz"] Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.457656 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" podStartSLOduration=31.270102528 podStartE2EDuration="41.457638888s" podCreationTimestamp="2026-02-27 17:14:21 +0000 UTC" firstStartedPulling="2026-02-27 17:14:50.519105623 +0000 UTC m=+1289.034903210" lastFinishedPulling="2026-02-27 17:15:00.706641983 +0000 UTC m=+1299.222439570" observedRunningTime="2026-02-27 17:15:02.450779094 +0000 UTC m=+1300.966576681" watchObservedRunningTime="2026-02-27 17:15:02.457638888 +0000 UTC m=+1300.973436475" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.490552 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mtzfr"] Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.491944 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.492500 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-compactor-0" podStartSLOduration=31.332359115 podStartE2EDuration="41.492491097s" podCreationTimestamp="2026-02-27 17:14:21 +0000 UTC" firstStartedPulling="2026-02-27 17:14:50.54750267 +0000 UTC m=+1289.063300257" lastFinishedPulling="2026-02-27 17:15:00.707634662 +0000 UTC m=+1299.223432239" observedRunningTime="2026-02-27 17:15:02.486089115 +0000 UTC m=+1301.001886702" watchObservedRunningTime="2026-02-27 17:15:02.492491097 +0000 UTC m=+1301.008288684" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.534038 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mtzfr"] Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.535516 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-hbxzw" podStartSLOduration=30.908036087 podStartE2EDuration="40.535499547s" podCreationTimestamp="2026-02-27 17:14:22 +0000 UTC" firstStartedPulling="2026-02-27 17:14:50.5450272 +0000 UTC m=+1289.060824787" lastFinishedPulling="2026-02-27 17:15:00.17249066 +0000 UTC m=+1298.688288247" observedRunningTime="2026-02-27 17:15:02.508184822 +0000 UTC m=+1301.023982409" watchObservedRunningTime="2026-02-27 17:15:02.535499547 +0000 UTC m=+1301.051297134" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.562261 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=24.126611615 podStartE2EDuration="54.562242936s" podCreationTimestamp="2026-02-27 17:14:08 +0000 UTC" firstStartedPulling="2026-02-27 17:14:18.891469982 +0000 UTC m=+1257.407267609" lastFinishedPulling="2026-02-27 17:14:49.327101353 +0000 UTC m=+1287.842898930" observedRunningTime="2026-02-27 17:15:02.545745958 +0000 UTC m=+1301.061543545" watchObservedRunningTime="2026-02-27 17:15:02.562242936 +0000 UTC m=+1301.078040513" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.605625 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" podStartSLOduration=30.419957488 podStartE2EDuration="40.605607876s" podCreationTimestamp="2026-02-27 17:14:22 +0000 UTC" firstStartedPulling="2026-02-27 17:14:50.523197188 +0000 UTC m=+1289.038994765" lastFinishedPulling="2026-02-27 17:15:00.708847566 +0000 UTC m=+1299.224645153" observedRunningTime="2026-02-27 17:15:02.591803665 +0000 UTC m=+1301.107601252" watchObservedRunningTime="2026-02-27 17:15:02.605607876 +0000 UTC m=+1301.121405453" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.631344 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3bed1ce-0365-4baa-9f88-d3052d1f86db-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-mtzfr\" (UID: \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\") " pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.631423 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bed1ce-0365-4baa-9f88-d3052d1f86db-config\") pod \"dnsmasq-dns-7cb5889db5-mtzfr\" (UID: \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\") " 
pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.631455 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvnmv\" (UniqueName: \"kubernetes.io/projected/c3bed1ce-0365-4baa-9f88-d3052d1f86db-kube-api-access-kvnmv\") pod \"dnsmasq-dns-7cb5889db5-mtzfr\" (UID: \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\") " pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.687755 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-gateway-7f8685b49f-fn48d" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.732914 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3bed1ce-0365-4baa-9f88-d3052d1f86db-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-mtzfr\" (UID: \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\") " pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.732976 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bed1ce-0365-4baa-9f88-d3052d1f86db-config\") pod \"dnsmasq-dns-7cb5889db5-mtzfr\" (UID: \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\") " pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.733035 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvnmv\" (UniqueName: \"kubernetes.io/projected/c3bed1ce-0365-4baa-9f88-d3052d1f86db-kube-api-access-kvnmv\") pod \"dnsmasq-dns-7cb5889db5-mtzfr\" (UID: \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\") " pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.734158 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3bed1ce-0365-4baa-9f88-d3052d1f86db-dns-svc\") pod \"dnsmasq-dns-7cb5889db5-mtzfr\" (UID: \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\") " pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.734657 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bed1ce-0365-4baa-9f88-d3052d1f86db-config\") pod \"dnsmasq-dns-7cb5889db5-mtzfr\" (UID: \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\") " pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.734683 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-b7fvz"] Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.735799 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.739078 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.775004 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b7fvz"] Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.779350 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvnmv\" (UniqueName: \"kubernetes.io/projected/c3bed1ce-0365-4baa-9f88-d3052d1f86db-kube-api-access-kvnmv\") pod \"dnsmasq-dns-7cb5889db5-mtzfr\" (UID: \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\") " pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.827420 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.835950 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdmmd\" (UniqueName: \"kubernetes.io/projected/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-kube-api-access-bdmmd\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.836035 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-ovn-rundir\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.836130 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-ovs-rundir\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.836156 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-config\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.836179 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-combined-ca-bundle\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.836233 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.940035 4708 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bdmmd\" (UniqueName: \"kubernetes.io/projected/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-kube-api-access-bdmmd\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.940291 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-ovn-rundir\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.940377 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-ovs-rundir\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.940399 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-config\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.940423 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-combined-ca-bundle\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.940472 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.941415 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-ovs-rundir\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.941747 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-ovn-rundir\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.947516 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-config\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.952673 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-combined-ca-bundle\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.957964 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-g6vsk"] Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.960114 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:02 crc kubenswrapper[4708]: I0227 17:15:02.985991 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdmmd\" (UniqueName: \"kubernetes.io/projected/7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d-kube-api-access-bdmmd\") pod \"ovn-controller-metrics-b7fvz\" (UID: \"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d\") " pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.023916 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-5psm7"] Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.025476 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:03 crc kubenswrapper[4708]: W0227 17:15:03.029351 4708 reflector.go:561] object-"openstack"/"ovsdbserver-nb": failed to list *v1.ConfigMap: configmaps "ovsdbserver-nb" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openstack": no relationship found between node 'crc' and this object Feb 27 17:15:03 crc kubenswrapper[4708]: E0227 17:15:03.029390 4708 reflector.go:158] "Unhandled Error" err="object-\"openstack\"/\"ovsdbserver-nb\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovsdbserver-nb\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openstack\": no relationship found between node 'crc' and this object" logger="UnhandledError" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.063689 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-b7fvz" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.125947 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-5psm7"] Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.161051 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-5psm7\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.161104 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-config\") pod \"dnsmasq-dns-74f6f696b9-5psm7\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.161144 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw4mx\" (UniqueName: \"kubernetes.io/projected/602b65d5-0ce1-4b43-b5dd-54df670c7a22-kube-api-access-dw4mx\") pod \"dnsmasq-dns-74f6f696b9-5psm7\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.161187 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-5psm7\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.262283 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-5psm7\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.262751 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-config\") pod \"dnsmasq-dns-74f6f696b9-5psm7\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.262827 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw4mx\" (UniqueName: \"kubernetes.io/projected/602b65d5-0ce1-4b43-b5dd-54df670c7a22-kube-api-access-dw4mx\") pod \"dnsmasq-dns-74f6f696b9-5psm7\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.262939 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-5psm7\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.263731 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-dns-svc\") pod \"dnsmasq-dns-74f6f696b9-5psm7\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.264589 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-config\") pod \"dnsmasq-dns-74f6f696b9-5psm7\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.291113 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw4mx\" (UniqueName: \"kubernetes.io/projected/602b65d5-0ce1-4b43-b5dd-54df670c7a22-kube-api-access-dw4mx\") pod \"dnsmasq-dns-74f6f696b9-5psm7\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.312155 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.327447 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-5psm7"] Feb 27 17:15:03 crc kubenswrapper[4708]: E0227 17:15:03.334335 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ovsdbserver-nb], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" podUID="602b65d5-0ce1-4b43-b5dd-54df670c7a22" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.419894 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-698758b865-svwxj"] Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.421528 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.430484 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.441287 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1","Type":"ContainerStarted","Data":"01c295ff638a2c471b2f5aa983b5d0fc6687c6bd3119ecf7efc71cde8eee6dd2"} Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.455773 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-svwxj"] Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.465479 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-config\") pod \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\" (UID: \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\") " Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.465610 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55nfg\" (UniqueName: \"kubernetes.io/projected/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-kube-api-access-55nfg\") pod \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\" (UID: \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\") " Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.465668 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-dns-svc\") pod \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\" (UID: \"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d\") " Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.467360 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-config" (OuterVolumeSpecName: "config") pod "a8a6341a-9926-4695-9c8d-ffe8d2cbb52d" (UID: "a8a6341a-9926-4695-9c8d-ffe8d2cbb52d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.471060 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a8a6341a-9926-4695-9c8d-ffe8d2cbb52d" (UID: "a8a6341a-9926-4695-9c8d-ffe8d2cbb52d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.494439 4708 generic.go:334] "Generic (PLEG): container finished" podID="ae8ae876-ebb6-4de9-bacc-1efece3d20a0" containerID="f10e8f69b87946145636ad505915d1f4f02d31bf0c709914f48cb1918f55cf6c" exitCode=0 Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.494681 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" event={"ID":"ae8ae876-ebb6-4de9-bacc-1efece3d20a0","Type":"ContainerDied","Data":"f10e8f69b87946145636ad505915d1f4f02d31bf0c709914f48cb1918f55cf6c"} Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.502151 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-kube-api-access-55nfg" (OuterVolumeSpecName: "kube-api-access-55nfg") pod "a8a6341a-9926-4695-9c8d-ffe8d2cbb52d" (UID: "a8a6341a-9926-4695-9c8d-ffe8d2cbb52d"). 
InnerVolumeSpecName "kube-api-access-55nfg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.508750 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"6cc07076-e637-443a-85c1-7b72beeb6cc7","Type":"ContainerStarted","Data":"7e89d051f4034252d72ed9c33d1002ccb945eaf9508c15fdc4b35ecd74d40f89"} Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.521457 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-ingester-0" event={"ID":"238aef54-b0dd-495b-a5f8-66cc43b12088","Type":"ContainerStarted","Data":"65f7627b96fc816428a06dda5e9ba592429d1841db4f0d3c27d02eb28e2a8d5a"} Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.521754 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.539410 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.539546 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-2dpsz" event={"ID":"a8a6341a-9926-4695-9c8d-ffe8d2cbb52d","Type":"ContainerDied","Data":"2c21e96b85589ef33cf0fb691be4b9f52446404f9ec3cd30581ccbffbbee3648"} Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.568731 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.568826 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.568864 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-config\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.568958 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8ls9\" (UniqueName: \"kubernetes.io/projected/3eca4c12-77bb-4e32-9738-1d29f1d2174a-kube-api-access-z8ls9\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.568979 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-dns-svc\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.569025 4708 reconciler_common.go:293] "Volume detached 
for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.569037 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.569047 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55nfg\" (UniqueName: \"kubernetes.io/projected/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d-kube-api-access-55nfg\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.596252 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-lokistack-index-gateway-0" event={"ID":"c56ea2d3-2905-47bd-b819-41705a3b858f","Type":"ContainerStarted","Data":"c457b54b7cc9f406823470cdd0bed76cf3fdb51819b39373f9e086ce4b2e22f5"} Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.598284 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.628791 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6zlsq" event={"ID":"2410b28c-0b9c-4da0-826a-bcbbab63a292","Type":"ContainerStarted","Data":"17187d62b0ebfb04133fe1b6dabffd02b88313ed81a19f180f1d6c3b14cf19a9"} Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.629218 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.637786 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.656363 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.656487 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.671088 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8ls9\" (UniqueName: \"kubernetes.io/projected/3eca4c12-77bb-4e32-9738-1d29f1d2174a-kube-api-access-z8ls9\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.671129 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-dns-svc\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.671182 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.671257 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.671284 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-config\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.671275 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.671684 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.671817 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.671951 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-f98sl" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.676192 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-config\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.676887 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-ovsdbserver-sb\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.676930 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-dns-svc\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.678585 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-ingester-0" podStartSLOduration=32.489804881 podStartE2EDuration="42.678550695s" podCreationTimestamp="2026-02-27 17:14:21 +0000 UTC" firstStartedPulling="2026-02-27 17:14:50.51933699 +0000 UTC m=+1289.035134567" lastFinishedPulling="2026-02-27 17:15:00.708082794 +0000 UTC m=+1299.223880381" observedRunningTime="2026-02-27 17:15:03.621042664 +0000 UTC m=+1302.136840251" watchObservedRunningTime="2026-02-27 17:15:03.678550695 +0000 UTC m=+1302.194348282" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.698084 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.721470 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.740680 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8ls9\" (UniqueName: \"kubernetes.io/projected/3eca4c12-77bb-4e32-9738-1d29f1d2174a-kube-api-access-z8ls9\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.741645 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-lokistack-index-gateway-0" podStartSLOduration=32.509204651 podStartE2EDuration="42.741625745s" podCreationTimestamp="2026-02-27 17:14:21 +0000 UTC" firstStartedPulling="2026-02-27 17:14:50.519634558 +0000 UTC m=+1289.035432145" lastFinishedPulling="2026-02-27 17:15:00.752055652 +0000 UTC m=+1299.267853239" observedRunningTime="2026-02-27 17:15:03.713767075 +0000 UTC m=+1302.229564662" watchObservedRunningTime="2026-02-27 17:15:03.741625745 +0000 UTC m=+1302.257423332" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.759448 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6zlsq" podStartSLOduration=36.977405955 podStartE2EDuration="47.75942432s" podCreationTimestamp="2026-02-27 17:14:16 +0000 UTC" firstStartedPulling="2026-02-27 17:14:49.949290118 +0000 UTC m=+1288.465087695" lastFinishedPulling="2026-02-27 17:15:00.731308473 +0000 UTC m=+1299.247106060" observedRunningTime="2026-02-27 17:15:03.740514053 +0000 UTC m=+1302.256311640" watchObservedRunningTime="2026-02-27 17:15:03.75942432 +0000 UTC m=+1302.275221907" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.773730 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e525119-22e4-4879-bec2-b7d830c00fcf-config\") pod \"0e525119-22e4-4879-bec2-b7d830c00fcf\" (UID: \"0e525119-22e4-4879-bec2-b7d830c00fcf\") " Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.778011 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f59t\" (UniqueName: \"kubernetes.io/projected/0e525119-22e4-4879-bec2-b7d830c00fcf-kube-api-access-4f59t\") pod \"0e525119-22e4-4879-bec2-b7d830c00fcf\" (UID: 
\"0e525119-22e4-4879-bec2-b7d830c00fcf\") " Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.778038 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-config\") pod \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.778168 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-dns-svc\") pod \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.778191 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e525119-22e4-4879-bec2-b7d830c00fcf-dns-svc\") pod \"0e525119-22e4-4879-bec2-b7d830c00fcf\" (UID: \"0e525119-22e4-4879-bec2-b7d830c00fcf\") " Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.778272 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw4mx\" (UniqueName: \"kubernetes.io/projected/602b65d5-0ce1-4b43-b5dd-54df670c7a22-kube-api-access-dw4mx\") pod \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.778543 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a41f59-1fee-425c-a42a-de40caa66c0f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.778618 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvdwt\" (UniqueName: \"kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-kube-api-access-fvdwt\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.775507 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e525119-22e4-4879-bec2-b7d830c00fcf-config" (OuterVolumeSpecName: "config") pod "0e525119-22e4-4879-bec2-b7d830c00fcf" (UID: "0e525119-22e4-4879-bec2-b7d830c00fcf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.781128 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mtzfr"] Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.782723 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-config" (OuterVolumeSpecName: "config") pod "602b65d5-0ce1-4b43-b5dd-54df670c7a22" (UID: "602b65d5-0ce1-4b43-b5dd-54df670c7a22"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.782904 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0e525119-22e4-4879-bec2-b7d830c00fcf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "0e525119-22e4-4879-bec2-b7d830c00fcf" (UID: "0e525119-22e4-4879-bec2-b7d830c00fcf"). 
InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.787607 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "602b65d5-0ce1-4b43-b5dd-54df670c7a22" (UID: "602b65d5-0ce1-4b43-b5dd-54df670c7a22"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.788371 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e8a41f59-1fee-425c-a42a-de40caa66c0f-lock\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.788414 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.788515 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e8a41f59-1fee-425c-a42a-de40caa66c0f-cache\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.788612 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-590b17e3-bb09-4a70-80c7-fa42161114eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-590b17e3-bb09-4a70-80c7-fa42161114eb\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.806255 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0e525119-22e4-4879-bec2-b7d830c00fcf-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.806287 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.806299 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.806308 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/0e525119-22e4-4879-bec2-b7d830c00fcf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.814523 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e525119-22e4-4879-bec2-b7d830c00fcf-kube-api-access-4f59t" (OuterVolumeSpecName: "kube-api-access-4f59t") pod "0e525119-22e4-4879-bec2-b7d830c00fcf" (UID: "0e525119-22e4-4879-bec2-b7d830c00fcf"). InnerVolumeSpecName "kube-api-access-4f59t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.815279 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/602b65d5-0ce1-4b43-b5dd-54df670c7a22-kube-api-access-dw4mx" (OuterVolumeSpecName: "kube-api-access-dw4mx") pod "602b65d5-0ce1-4b43-b5dd-54df670c7a22" (UID: "602b65d5-0ce1-4b43-b5dd-54df670c7a22"). InnerVolumeSpecName "kube-api-access-dw4mx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.863754 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2dpsz"] Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.869596 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-2dpsz"] Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.908058 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a41f59-1fee-425c-a42a-de40caa66c0f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.908108 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fvdwt\" (UniqueName: \"kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-kube-api-access-fvdwt\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.908180 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e8a41f59-1fee-425c-a42a-de40caa66c0f-lock\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.908199 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.908232 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e8a41f59-1fee-425c-a42a-de40caa66c0f-cache\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.908258 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-590b17e3-bb09-4a70-80c7-fa42161114eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-590b17e3-bb09-4a70-80c7-fa42161114eb\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.908317 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw4mx\" (UniqueName: \"kubernetes.io/projected/602b65d5-0ce1-4b43-b5dd-54df670c7a22-kube-api-access-dw4mx\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.908328 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f59t\" (UniqueName: 
\"kubernetes.io/projected/0e525119-22e4-4879-bec2-b7d830c00fcf-kube-api-access-4f59t\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:03 crc kubenswrapper[4708]: E0227 17:15:03.909332 4708 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 17:15:03 crc kubenswrapper[4708]: E0227 17:15:03.909346 4708 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 17:15:03 crc kubenswrapper[4708]: E0227 17:15:03.909391 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift podName:e8a41f59-1fee-425c-a42a-de40caa66c0f nodeName:}" failed. No retries permitted until 2026-02-27 17:15:04.409372774 +0000 UTC m=+1302.925170361 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift") pod "swift-storage-0" (UID: "e8a41f59-1fee-425c-a42a-de40caa66c0f") : configmap "swift-ring-files" not found Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.909941 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/e8a41f59-1fee-425c-a42a-de40caa66c0f-lock\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.910166 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/e8a41f59-1fee-425c-a42a-de40caa66c0f-cache\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.930252 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a41f59-1fee-425c-a42a-de40caa66c0f-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.937400 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.937425 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-590b17e3-bb09-4a70-80c7-fa42161114eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-590b17e3-bb09-4a70-80c7-fa42161114eb\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/74cc5e09bb5e6e58b02efc6e37c7e4437bdeb2741fa68223f962635be1d86c64/globalmount\"" pod="openstack/swift-storage-0" Feb 27 17:15:03 crc kubenswrapper[4708]: I0227 17:15:03.945265 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fvdwt\" (UniqueName: \"kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-kube-api-access-fvdwt\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.003480 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-590b17e3-bb09-4a70-80c7-fa42161114eb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-590b17e3-bb09-4a70-80c7-fa42161114eb\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.079656 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-b7fvz"] Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.094830 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.095661 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-ovsdbserver-nb\") pod \"dnsmasq-dns-698758b865-svwxj\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.108520 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-ovsdbserver-nb\") pod \"dnsmasq-dns-74f6f696b9-5psm7\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.109922 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.214200 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-ovsdbserver-nb\") pod \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\" (UID: \"602b65d5-0ce1-4b43-b5dd-54df670c7a22\") " Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.215587 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "602b65d5-0ce1-4b43-b5dd-54df670c7a22" (UID: "602b65d5-0ce1-4b43-b5dd-54df670c7a22"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.241293 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8a6341a-9926-4695-9c8d-ffe8d2cbb52d" path="/var/lib/kubelet/pods/a8a6341a-9926-4695-9c8d-ffe8d2cbb52d/volumes" Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.316411 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/602b65d5-0ce1-4b43-b5dd-54df670c7a22-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.418325 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:04 crc kubenswrapper[4708]: E0227 17:15:04.418536 4708 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 17:15:04 crc kubenswrapper[4708]: E0227 17:15:04.418569 4708 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 17:15:04 crc kubenswrapper[4708]: E0227 17:15:04.418629 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift podName:e8a41f59-1fee-425c-a42a-de40caa66c0f nodeName:}" failed. No retries permitted until 2026-02-27 17:15:05.418611151 +0000 UTC m=+1303.934408738 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift") pod "swift-storage-0" (UID: "e8a41f59-1fee-425c-a42a-de40caa66c0f") : configmap "swift-ring-files" not found Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.571560 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-698758b865-svwxj"] Feb 27 17:15:04 crc kubenswrapper[4708]: W0227 17:15:04.577527 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3eca4c12_77bb_4e32_9738_1d29f1d2174a.slice/crio-858bf0da8c7ad137cfa66d424ee1113c0e2c8239546ea54c4cb0da4d71673a60 WatchSource:0}: Error finding container 858bf0da8c7ad137cfa66d424ee1113c0e2c8239546ea54c4cb0da4d71673a60: Status 404 returned error can't find the container with id 858bf0da8c7ad137cfa66d424ee1113c0e2c8239546ea54c4cb0da4d71673a60 Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.639929 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b7fvz" event={"ID":"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d","Type":"ContainerStarted","Data":"89b20b2f5415b26c4f33bc7322af38b810d1d4fdcf67aebecc980bce5e1226e4"} Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.641694 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eb2fe191-cb57-46a6-9797-c9890640ff74","Type":"ContainerStarted","Data":"ea6efe329dc3900ef121cd51e3a92aff2c13514c06bcc2dca88ecfedec053939"} Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.642996 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-svwxj" 
event={"ID":"3eca4c12-77bb-4e32-9738-1d29f1d2174a","Type":"ContainerStarted","Data":"858bf0da8c7ad137cfa66d424ee1113c0e2c8239546ea54c4cb0da4d71673a60"} Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.645283 4708 generic.go:334] "Generic (PLEG): container finished" podID="cdfec2dc-369d-405a-a7c4-95c4b5a08d8a" containerID="1a529ad4063e884e4804b1b948c013f2b72497f63177270bb6336171b52db845" exitCode=0 Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.645368 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k2qzb" event={"ID":"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a","Type":"ContainerDied","Data":"1a529ad4063e884e4804b1b948c013f2b72497f63177270bb6336171b52db845"} Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.647810 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"32b89444-fadf-43c8-b552-e5071fc91481","Type":"ContainerStarted","Data":"8d25a437ba280e82ae6ccb8c17682dd2c1e48ce39e30a689e3c5b1b70467c5c8"} Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.649334 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" event={"ID":"c3bed1ce-0365-4baa-9f88-d3052d1f86db","Type":"ContainerStarted","Data":"9debb7a3d1d2aa6e84abda68ef5efe4e87e5d4d3425f5e341f2786f655d06cab"} Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.651621 4708 generic.go:334] "Generic (PLEG): container finished" podID="c129cc00-13ca-4502-aa1b-866133b164a9" containerID="2e7a064c26d17a9d34a9bfa4a83396738620cb57acbf204cdfae3a3489c41b06" exitCode=0 Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.651749 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c129cc00-13ca-4502-aa1b-866133b164a9","Type":"ContainerDied","Data":"2e7a064c26d17a9d34a9bfa4a83396738620cb57acbf204cdfae3a3489c41b06"} Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.653751 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" event={"ID":"0e525119-22e4-4879-bec2-b7d830c00fcf","Type":"ContainerDied","Data":"c6511cc7bf105090bd144c9d3c654a314b5179a76cb7bc9699d3c9615f375b2a"} Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.653881 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-g6vsk" Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.654109 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-74f6f696b9-5psm7" Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.654795 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-6zlsq" Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.808943 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-5psm7"] Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.816994 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-74f6f696b9-5psm7"] Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.839081 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-g6vsk"] Feb 27 17:15:04 crc kubenswrapper[4708]: I0227 17:15:04.850740 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-g6vsk"] Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.089564 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.236922 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-secret-volume\") pod \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\" (UID: \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\") " Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.237021 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-config-volume\") pod \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\" (UID: \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\") " Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.237117 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r7s9h\" (UniqueName: \"kubernetes.io/projected/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-kube-api-access-r7s9h\") pod \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\" (UID: \"ae8ae876-ebb6-4de9-bacc-1efece3d20a0\") " Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.238743 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-config-volume" (OuterVolumeSpecName: "config-volume") pod "ae8ae876-ebb6-4de9-bacc-1efece3d20a0" (UID: "ae8ae876-ebb6-4de9-bacc-1efece3d20a0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.243006 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ae8ae876-ebb6-4de9-bacc-1efece3d20a0" (UID: "ae8ae876-ebb6-4de9-bacc-1efece3d20a0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.246181 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-kube-api-access-r7s9h" (OuterVolumeSpecName: "kube-api-access-r7s9h") pod "ae8ae876-ebb6-4de9-bacc-1efece3d20a0" (UID: "ae8ae876-ebb6-4de9-bacc-1efece3d20a0"). InnerVolumeSpecName "kube-api-access-r7s9h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.338805 4708 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.338834 4708 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.338860 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r7s9h\" (UniqueName: \"kubernetes.io/projected/ae8ae876-ebb6-4de9-bacc-1efece3d20a0-kube-api-access-r7s9h\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.440011 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:05 crc kubenswrapper[4708]: E0227 17:15:05.440222 4708 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 17:15:05 crc kubenswrapper[4708]: E0227 17:15:05.440248 4708 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 17:15:05 crc kubenswrapper[4708]: E0227 17:15:05.440310 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift podName:e8a41f59-1fee-425c-a42a-de40caa66c0f nodeName:}" failed. No retries permitted until 2026-02-27 17:15:07.440291555 +0000 UTC m=+1305.956089132 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift") pod "swift-storage-0" (UID: "e8a41f59-1fee-425c-a42a-de40caa66c0f") : configmap "swift-ring-files" not found Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.663046 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.663043 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt" event={"ID":"ae8ae876-ebb6-4de9-bacc-1efece3d20a0","Type":"ContainerDied","Data":"a5a84c6573f0dd5d26d4db28e27643e4e275fd89965f2e68eed845f4c1a35d21"} Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.663544 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5a84c6573f0dd5d26d4db28e27643e4e275fd89965f2e68eed845f4c1a35d21" Feb 27 17:15:05 crc kubenswrapper[4708]: I0227 17:15:05.667150 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k2qzb" event={"ID":"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a","Type":"ContainerStarted","Data":"de6c89aee713365c9f520f622f76c26b6ada3c224ee07e4ea998a42b85ec0d22"} Feb 27 17:15:06 crc kubenswrapper[4708]: I0227 17:15:06.239896 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e525119-22e4-4879-bec2-b7d830c00fcf" path="/var/lib/kubelet/pods/0e525119-22e4-4879-bec2-b7d830c00fcf/volumes" Feb 27 17:15:06 crc kubenswrapper[4708]: I0227 17:15:06.240367 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="602b65d5-0ce1-4b43-b5dd-54df670c7a22" path="/var/lib/kubelet/pods/602b65d5-0ce1-4b43-b5dd-54df670c7a22/volumes" Feb 27 17:15:06 crc kubenswrapper[4708]: I0227 17:15:06.677372 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-k2qzb" event={"ID":"cdfec2dc-369d-405a-a7c4-95c4b5a08d8a","Type":"ContainerStarted","Data":"2c8bef32aa1534332552424b10bfb927dd6cc64a52475cd049f4d399bc39f091"} Feb 27 17:15:06 crc kubenswrapper[4708]: I0227 17:15:06.678685 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:15:06 crc kubenswrapper[4708]: I0227 17:15:06.678715 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.480594 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:07 crc kubenswrapper[4708]: E0227 17:15:07.480798 4708 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 17:15:07 crc kubenswrapper[4708]: E0227 17:15:07.480831 4708 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 17:15:07 crc kubenswrapper[4708]: E0227 17:15:07.480904 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift podName:e8a41f59-1fee-425c-a42a-de40caa66c0f nodeName:}" failed. No retries permitted until 2026-02-27 17:15:11.480886986 +0000 UTC m=+1309.996684573 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift") pod "swift-storage-0" (UID: "e8a41f59-1fee-425c-a42a-de40caa66c0f") : configmap "swift-ring-files" not found Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.517727 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-k2qzb" podStartSLOduration=42.583338582 podStartE2EDuration="51.517712571s" podCreationTimestamp="2026-02-27 17:14:16 +0000 UTC" firstStartedPulling="2026-02-27 17:14:51.373735358 +0000 UTC m=+1289.889532945" lastFinishedPulling="2026-02-27 17:15:00.308109347 +0000 UTC m=+1298.823906934" observedRunningTime="2026-02-27 17:15:06.737388213 +0000 UTC m=+1305.253185820" watchObservedRunningTime="2026-02-27 17:15:07.517712571 +0000 UTC m=+1306.033510158" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.518159 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-wq4dg"] Feb 27 17:15:07 crc kubenswrapper[4708]: E0227 17:15:07.518492 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae8ae876-ebb6-4de9-bacc-1efece3d20a0" containerName="collect-profiles" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.518507 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae8ae876-ebb6-4de9-bacc-1efece3d20a0" containerName="collect-profiles" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.518654 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae8ae876-ebb6-4de9-bacc-1efece3d20a0" containerName="collect-profiles" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.519477 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.522491 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.522735 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.532302 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.556825 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wq4dg"] Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.685041 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-swiftconf\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.685162 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-dispersionconf\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.685349 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-combined-ca-bundle\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.685586 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/487e829b-b6b1-4c03-8c90-f35a10aee7a2-etc-swift\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.685773 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7prrs\" (UniqueName: \"kubernetes.io/projected/487e829b-b6b1-4c03-8c90-f35a10aee7a2-kube-api-access-7prrs\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.685888 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/487e829b-b6b1-4c03-8c90-f35a10aee7a2-scripts\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.685932 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/487e829b-b6b1-4c03-8c90-f35a10aee7a2-ring-data-devices\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.787762 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/487e829b-b6b1-4c03-8c90-f35a10aee7a2-etc-swift\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.787843 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7prrs\" (UniqueName: \"kubernetes.io/projected/487e829b-b6b1-4c03-8c90-f35a10aee7a2-kube-api-access-7prrs\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.787899 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/487e829b-b6b1-4c03-8c90-f35a10aee7a2-scripts\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.787932 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/487e829b-b6b1-4c03-8c90-f35a10aee7a2-ring-data-devices\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.787983 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: 
\"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-swiftconf\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.788057 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-dispersionconf\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.788093 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-combined-ca-bundle\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.788821 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/487e829b-b6b1-4c03-8c90-f35a10aee7a2-scripts\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.789165 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/487e829b-b6b1-4c03-8c90-f35a10aee7a2-etc-swift\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.789260 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/487e829b-b6b1-4c03-8c90-f35a10aee7a2-ring-data-devices\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.793499 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-dispersionconf\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.796971 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-swiftconf\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.800359 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-combined-ca-bundle\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.807063 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7prrs\" (UniqueName: \"kubernetes.io/projected/487e829b-b6b1-4c03-8c90-f35a10aee7a2-kube-api-access-7prrs\") pod \"swift-ring-rebalance-wq4dg\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " 
pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:07 crc kubenswrapper[4708]: I0227 17:15:07.870917 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:09 crc kubenswrapper[4708]: I0227 17:15:09.240777 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-wq4dg"] Feb 27 17:15:09 crc kubenswrapper[4708]: I0227 17:15:09.706290 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wq4dg" event={"ID":"487e829b-b6b1-4c03-8c90-f35a10aee7a2","Type":"ContainerStarted","Data":"424b3cf59cd7e35ccb3e10d9ca8245de3e7a83cdb341051b289c59eea4dec243"} Feb 27 17:15:10 crc kubenswrapper[4708]: I0227 17:15:10.093518 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Feb 27 17:15:10 crc kubenswrapper[4708]: I0227 17:15:10.093922 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Feb 27 17:15:10 crc kubenswrapper[4708]: I0227 17:15:10.718100 4708 generic.go:334] "Generic (PLEG): container finished" podID="6cc07076-e637-443a-85c1-7b72beeb6cc7" containerID="7e89d051f4034252d72ed9c33d1002ccb945eaf9508c15fdc4b35ecd74d40f89" exitCode=0 Feb 27 17:15:10 crc kubenswrapper[4708]: I0227 17:15:10.718186 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"6cc07076-e637-443a-85c1-7b72beeb6cc7","Type":"ContainerDied","Data":"7e89d051f4034252d72ed9c33d1002ccb945eaf9508c15fdc4b35ecd74d40f89"} Feb 27 17:15:10 crc kubenswrapper[4708]: I0227 17:15:10.719978 4708 generic.go:334] "Generic (PLEG): container finished" podID="c3bed1ce-0365-4baa-9f88-d3052d1f86db" containerID="67992c29947c026954256c07a6f33c9fd6945dad3e4ef7543f58d9cae377784f" exitCode=0 Feb 27 17:15:10 crc kubenswrapper[4708]: I0227 17:15:10.720007 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" event={"ID":"c3bed1ce-0365-4baa-9f88-d3052d1f86db","Type":"ContainerDied","Data":"67992c29947c026954256c07a6f33c9fd6945dad3e4ef7543f58d9cae377784f"} Feb 27 17:15:11 crc kubenswrapper[4708]: I0227 17:15:11.560980 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:11 crc kubenswrapper[4708]: E0227 17:15:11.561240 4708 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 17:15:11 crc kubenswrapper[4708]: E0227 17:15:11.561643 4708 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 17:15:11 crc kubenswrapper[4708]: E0227 17:15:11.561754 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift podName:e8a41f59-1fee-425c-a42a-de40caa66c0f nodeName:}" failed. No retries permitted until 2026-02-27 17:15:19.561717838 +0000 UTC m=+1318.077515465 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift") pod "swift-storage-0" (UID: "e8a41f59-1fee-425c-a42a-de40caa66c0f") : configmap "swift-ring-files" not found Feb 27 17:15:16 crc kubenswrapper[4708]: I0227 17:15:15.999513 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Feb 27 17:15:16 crc kubenswrapper[4708]: I0227 17:15:16.142524 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.795421 4708 generic.go:334] "Generic (PLEG): container finished" podID="3eca4c12-77bb-4e32-9738-1d29f1d2174a" containerID="c59b96556c4590204aaef72112417d7abd8bd28ea7832b1b131569c535cf744f" exitCode=0 Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.796273 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-svwxj" event={"ID":"3eca4c12-77bb-4e32-9738-1d29f1d2174a","Type":"ContainerDied","Data":"c59b96556c4590204aaef72112417d7abd8bd28ea7832b1b131569c535cf744f"} Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.803404 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e2314c35-5338-4db2-a705-53cbc737f9a1","Type":"ContainerStarted","Data":"de0f984a572dcddeee7e002593500687c873145260422b4675bdb031378b37e9"} Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.803445 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"e2314c35-5338-4db2-a705-53cbc737f9a1","Type":"ContainerStarted","Data":"2570b4297a503c1b58feab3e0425cae437d2f1f29b144e1a6db0fdad8d6be1bf"} Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.806271 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" event={"ID":"c3bed1ce-0365-4baa-9f88-d3052d1f86db","Type":"ContainerStarted","Data":"835d73dde6fc391422ddb0d3f4bc0e99815d2df2768c4fd436e231b86f8f9786"} Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.807178 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.811915 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"9f2235b7-8f1a-4510-8ca8-ed784bf1aec1","Type":"ContainerStarted","Data":"979ef91c99a1e816a10ac2fca3c37ea68fc8b9794e1354d3a1ea9f5a94ab3298"} Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.813748 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-b7fvz" event={"ID":"7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d","Type":"ContainerStarted","Data":"2fd0dd644871fd14c4237d6c08c0951767595d135ffa917c98dd93f84bf543b7"} Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.850884 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-qt4x7"] Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.852935 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qt4x7" Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.863856 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qt4x7"] Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.877705 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.931266 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=32.341654622 podStartE2EDuration="59.931250189s" podCreationTimestamp="2026-02-27 17:14:19 +0000 UTC" firstStartedPulling="2026-02-27 17:14:50.4473279 +0000 UTC m=+1288.963125487" lastFinishedPulling="2026-02-27 17:15:18.036923407 +0000 UTC m=+1316.552721054" observedRunningTime="2026-02-27 17:15:18.930333843 +0000 UTC m=+1317.446131430" watchObservedRunningTime="2026-02-27 17:15:18.931250189 +0000 UTC m=+1317.447047776" Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.933070 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" podStartSLOduration=12.067803184 podStartE2EDuration="16.933061901s" podCreationTimestamp="2026-02-27 17:15:02 +0000 UTC" firstStartedPulling="2026-02-27 17:15:03.854306961 +0000 UTC m=+1302.370104548" lastFinishedPulling="2026-02-27 17:15:08.719565668 +0000 UTC m=+1307.235363265" observedRunningTime="2026-02-27 17:15:18.906395224 +0000 UTC m=+1317.422192811" watchObservedRunningTime="2026-02-27 17:15:18.933061901 +0000 UTC m=+1317.448859488" Feb 27 17:15:18 crc kubenswrapper[4708]: I0227 17:15:18.959467 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-b7fvz" podStartSLOduration=2.993047747 podStartE2EDuration="16.959448779s" podCreationTimestamp="2026-02-27 17:15:02 +0000 UTC" firstStartedPulling="2026-02-27 17:15:04.074328613 +0000 UTC m=+1302.590126200" lastFinishedPulling="2026-02-27 17:15:18.040729635 +0000 UTC m=+1316.556527232" observedRunningTime="2026-02-27 17:15:18.945578766 +0000 UTC m=+1317.461376353" watchObservedRunningTime="2026-02-27 17:15:18.959448779 +0000 UTC m=+1317.475246366" Feb 27 17:15:19 crc kubenswrapper[4708]: I0227 17:15:19.026212 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qscz7\" (UniqueName: \"kubernetes.io/projected/b484936d-0feb-4107-a28a-2e0c7ac7e267-kube-api-access-qscz7\") pod \"root-account-create-update-qt4x7\" (UID: \"b484936d-0feb-4107-a28a-2e0c7ac7e267\") " pod="openstack/root-account-create-update-qt4x7" Feb 27 17:15:19 crc kubenswrapper[4708]: I0227 17:15:19.026288 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b484936d-0feb-4107-a28a-2e0c7ac7e267-operator-scripts\") pod \"root-account-create-update-qt4x7\" (UID: \"b484936d-0feb-4107-a28a-2e0c7ac7e267\") " pod="openstack/root-account-create-update-qt4x7" Feb 27 17:15:19 crc kubenswrapper[4708]: I0227 17:15:19.132498 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qscz7\" (UniqueName: \"kubernetes.io/projected/b484936d-0feb-4107-a28a-2e0c7ac7e267-kube-api-access-qscz7\") pod \"root-account-create-update-qt4x7\" (UID: \"b484936d-0feb-4107-a28a-2e0c7ac7e267\") " pod="openstack/root-account-create-update-qt4x7" 
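The etc-swift failures above are the kubelet's projected-volume reconciler waiting for a ConfigMap that does not exist yet: the swift-ring-files ConfigMap is published by the swift-ring-rebalance job, which is still running at this point, so each MountVolume.SetUp attempt fails and is re-queued with a doubling durationBeforeRetry (8s here, then 16s and 32s further down). A minimal Go sketch of that doubling-with-cap delay, assuming a 500ms seed and a roughly two-minute cap (illustrative values; this is not the kubelet's actual nestedpendingoperations implementation):

    package main

    import (
        "fmt"
        "time"
    )

    // nextDelay doubles the previous retry delay up to maxDelay, mirroring
    // the 8s -> 16s -> 32s progression in the durationBeforeRetry values
    // logged above. The 500ms seed and the cap are assumptions.
    func nextDelay(prev, maxDelay time.Duration) time.Duration {
        if prev <= 0 {
            return 500 * time.Millisecond
        }
        if d := prev * 2; d < maxDelay {
            return d
        }
        return maxDelay
    }

    func main() {
        var d time.Duration
        for i := 0; i < 10; i++ {
            d = nextDelay(d, 2*time.Minute+2*time.Second)
            fmt.Println(d) // 500ms, 1s, 2s, 4s, 8s, 16s, 32s, ... capped
        }
    }

Once the rebalance job finishes and the ConfigMap exists, the next scheduled retry would be expected to mount the volume normally.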
Feb 27 17:15:19 crc kubenswrapper[4708]: I0227 17:15:19.132569 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b484936d-0feb-4107-a28a-2e0c7ac7e267-operator-scripts\") pod \"root-account-create-update-qt4x7\" (UID: \"b484936d-0feb-4107-a28a-2e0c7ac7e267\") " pod="openstack/root-account-create-update-qt4x7" Feb 27 17:15:19 crc kubenswrapper[4708]: I0227 17:15:19.134525 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b484936d-0feb-4107-a28a-2e0c7ac7e267-operator-scripts\") pod \"root-account-create-update-qt4x7\" (UID: \"b484936d-0feb-4107-a28a-2e0c7ac7e267\") " pod="openstack/root-account-create-update-qt4x7" Feb 27 17:15:19 crc kubenswrapper[4708]: I0227 17:15:19.161504 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qscz7\" (UniqueName: \"kubernetes.io/projected/b484936d-0feb-4107-a28a-2e0c7ac7e267-kube-api-access-qscz7\") pod \"root-account-create-update-qt4x7\" (UID: \"b484936d-0feb-4107-a28a-2e0c7ac7e267\") " pod="openstack/root-account-create-update-qt4x7" Feb 27 17:15:19 crc kubenswrapper[4708]: I0227 17:15:19.314807 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qt4x7" Feb 27 17:15:19 crc kubenswrapper[4708]: I0227 17:15:19.642509 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:19 crc kubenswrapper[4708]: E0227 17:15:19.642718 4708 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 17:15:19 crc kubenswrapper[4708]: E0227 17:15:19.642732 4708 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 17:15:19 crc kubenswrapper[4708]: E0227 17:15:19.642776 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift podName:e8a41f59-1fee-425c-a42a-de40caa66c0f nodeName:}" failed. No retries permitted until 2026-02-27 17:15:35.642760644 +0000 UTC m=+1334.158558231 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift") pod "swift-storage-0" (UID: "e8a41f59-1fee-425c-a42a-de40caa66c0f") : configmap "swift-ring-files" not found Feb 27 17:15:19 crc kubenswrapper[4708]: I0227 17:15:19.849659 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=37.465638224 podStartE2EDuration="1m4.849643903s" podCreationTimestamp="2026-02-27 17:14:15 +0000 UTC" firstStartedPulling="2026-02-27 17:14:50.555516825 +0000 UTC m=+1289.071314412" lastFinishedPulling="2026-02-27 17:15:17.939522504 +0000 UTC m=+1316.455320091" observedRunningTime="2026-02-27 17:15:19.844815066 +0000 UTC m=+1318.360612653" watchObservedRunningTime="2026-02-27 17:15:19.849643903 +0000 UTC m=+1318.365441490" Feb 27 17:15:20 crc kubenswrapper[4708]: I0227 17:15:20.524546 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Feb 27 17:15:20 crc kubenswrapper[4708]: I0227 17:15:20.589722 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Feb 27 17:15:20 crc kubenswrapper[4708]: I0227 17:15:20.589875 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Feb 27 17:15:20 crc kubenswrapper[4708]: I0227 17:15:20.642557 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Feb 27 17:15:20 crc kubenswrapper[4708]: I0227 17:15:20.839630 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-svwxj" event={"ID":"3eca4c12-77bb-4e32-9738-1d29f1d2174a","Type":"ContainerStarted","Data":"4ef9315f37c4e43eb2653e30684c7f05d89304dfd469839b4de9cc866ad329d4"} Feb 27 17:15:20 crc kubenswrapper[4708]: I0227 17:15:20.840971 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:20 crc kubenswrapper[4708]: I0227 17:15:20.866472 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-698758b865-svwxj" podStartSLOduration=4.401590447 podStartE2EDuration="17.86645493s" podCreationTimestamp="2026-02-27 17:15:03 +0000 UTC" firstStartedPulling="2026-02-27 17:15:04.580657798 +0000 UTC m=+1303.096455385" lastFinishedPulling="2026-02-27 17:15:18.045522281 +0000 UTC m=+1316.561319868" observedRunningTime="2026-02-27 17:15:20.861817729 +0000 UTC m=+1319.377615316" watchObservedRunningTime="2026-02-27 17:15:20.86645493 +0000 UTC m=+1319.382252527" Feb 27 17:15:20 crc kubenswrapper[4708]: I0227 17:15:20.907633 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Feb 27 17:15:21 crc kubenswrapper[4708]: I0227 17:15:21.850573 4708 generic.go:334] "Generic (PLEG): container finished" podID="6f6f6892-d9d6-4f71-bc65-8e47c15bddc1" containerID="01c295ff638a2c471b2f5aa983b5d0fc6687c6bd3119ecf7efc71cde8eee6dd2" exitCode=0 Feb 27 17:15:21 crc kubenswrapper[4708]: I0227 17:15:21.850658 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1","Type":"ContainerDied","Data":"01c295ff638a2c471b2f5aa983b5d0fc6687c6bd3119ecf7efc71cde8eee6dd2"} Feb 27 17:15:22 crc kubenswrapper[4708]: I0227 17:15:22.047642 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/cloudkitty-lokistack-distributor-585d9bcbc-xjzz8" Feb 27 17:15:22 crc kubenswrapper[4708]: I0227 17:15:22.356963 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26" Feb 27 17:15:22 crc kubenswrapper[4708]: I0227 17:15:22.505760 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-querier-58c84b5844-wb4dk" Feb 27 17:15:22 crc kubenswrapper[4708]: I0227 17:15:22.524495 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.200906 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="238aef54-b0dd-495b-a5f8-66cc43b12088" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.317757 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-compactor-0" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.358547 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-index-gateway-0" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.587132 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.639148 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.837742 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.839471 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.843739 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-qpbvp" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.844064 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.844121 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.844174 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.872017 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.958453 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xx2k\" (UniqueName: \"kubernetes.io/projected/d3d398d5-587b-48e8-b90b-a3e511311982-kube-api-access-8xx2k\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.958502 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d3d398d5-587b-48e8-b90b-a3e511311982-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.958566 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d398d5-587b-48e8-b90b-a3e511311982-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.958659 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3d398d5-587b-48e8-b90b-a3e511311982-scripts\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.958687 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d398d5-587b-48e8-b90b-a3e511311982-config\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.958704 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d398d5-587b-48e8-b90b-a3e511311982-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:23 crc kubenswrapper[4708]: I0227 17:15:23.958728 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d398d5-587b-48e8-b90b-a3e511311982-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: 
I0227 17:15:24.060739 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d398d5-587b-48e8-b90b-a3e511311982-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: I0227 17:15:24.060893 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xx2k\" (UniqueName: \"kubernetes.io/projected/d3d398d5-587b-48e8-b90b-a3e511311982-kube-api-access-8xx2k\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: I0227 17:15:24.060935 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d3d398d5-587b-48e8-b90b-a3e511311982-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: I0227 17:15:24.060992 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d398d5-587b-48e8-b90b-a3e511311982-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: I0227 17:15:24.061022 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3d398d5-587b-48e8-b90b-a3e511311982-scripts\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: I0227 17:15:24.061040 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d398d5-587b-48e8-b90b-a3e511311982-config\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: I0227 17:15:24.061060 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d398d5-587b-48e8-b90b-a3e511311982-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: I0227 17:15:24.061595 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/d3d398d5-587b-48e8-b90b-a3e511311982-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: I0227 17:15:24.062400 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/d3d398d5-587b-48e8-b90b-a3e511311982-scripts\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: I0227 17:15:24.062736 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3d398d5-587b-48e8-b90b-a3e511311982-config\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: I0227 17:15:24.069268 4708 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d398d5-587b-48e8-b90b-a3e511311982-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: I0227 17:15:24.070296 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3d398d5-587b-48e8-b90b-a3e511311982-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: I0227 17:15:24.071264 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3d398d5-587b-48e8-b90b-a3e511311982-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: I0227 17:15:24.097572 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xx2k\" (UniqueName: \"kubernetes.io/projected/d3d398d5-587b-48e8-b90b-a3e511311982-kube-api-access-8xx2k\") pod \"ovn-northd-0\" (UID: \"d3d398d5-587b-48e8-b90b-a3e511311982\") " pod="openstack/ovn-northd-0" Feb 27 17:15:24 crc kubenswrapper[4708]: I0227 17:15:24.165022 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Feb 27 17:15:26 crc kubenswrapper[4708]: I0227 17:15:26.241455 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-qt4x7"] Feb 27 17:15:27 crc kubenswrapper[4708]: I0227 17:15:27.829378 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:29 crc kubenswrapper[4708]: I0227 17:15:29.112192 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:15:29 crc kubenswrapper[4708]: I0227 17:15:29.198076 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mtzfr"] Feb 27 17:15:29 crc kubenswrapper[4708]: I0227 17:15:29.198324 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" podUID="c3bed1ce-0365-4baa-9f88-d3052d1f86db" containerName="dnsmasq-dns" containerID="cri-o://835d73dde6fc391422ddb0d3f4bc0e99815d2df2768c4fd436e231b86f8f9786" gracePeriod=10 Feb 27 17:15:30 crc kubenswrapper[4708]: W0227 17:15:30.049061 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb484936d_0feb_4107_a28a_2e0c7ac7e267.slice/crio-998bf07cbec2d4466053e569831e4e5b6507019ccf303d5ee9aa945753ed7eb4 WatchSource:0}: Error finding container 998bf07cbec2d4466053e569831e4e5b6507019ccf303d5ee9aa945753ed7eb4: Status 404 returned error can't find the container with id 998bf07cbec2d4466053e569831e4e5b6507019ccf303d5ee9aa945753ed7eb4 Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.095320 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret" Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.440982 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.584994 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kvnmv\" (UniqueName: \"kubernetes.io/projected/c3bed1ce-0365-4baa-9f88-d3052d1f86db-kube-api-access-kvnmv\") pod \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\" (UID: \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\") " Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.585321 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3bed1ce-0365-4baa-9f88-d3052d1f86db-dns-svc\") pod \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\" (UID: \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\") " Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.585372 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bed1ce-0365-4baa-9f88-d3052d1f86db-config\") pod \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\" (UID: \"c3bed1ce-0365-4baa-9f88-d3052d1f86db\") " Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.590732 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3bed1ce-0365-4baa-9f88-d3052d1f86db-kube-api-access-kvnmv" (OuterVolumeSpecName: "kube-api-access-kvnmv") pod "c3bed1ce-0365-4baa-9f88-d3052d1f86db" (UID: "c3bed1ce-0365-4baa-9f88-d3052d1f86db"). InnerVolumeSpecName "kube-api-access-kvnmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.619236 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Feb 27 17:15:30 crc kubenswrapper[4708]: W0227 17:15:30.623425 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd3d398d5_587b_48e8_b90b_a3e511311982.slice/crio-c1553636000125402752fdd9ef33e05c485502865a9d5bd21067fe13edf2da5f WatchSource:0}: Error finding container c1553636000125402752fdd9ef33e05c485502865a9d5bd21067fe13edf2da5f: Status 404 returned error can't find the container with id c1553636000125402752fdd9ef33e05c485502865a9d5bd21067fe13edf2da5f Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.636272 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3bed1ce-0365-4baa-9f88-d3052d1f86db-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c3bed1ce-0365-4baa-9f88-d3052d1f86db" (UID: "c3bed1ce-0365-4baa-9f88-d3052d1f86db"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.640596 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3bed1ce-0365-4baa-9f88-d3052d1f86db-config" (OuterVolumeSpecName: "config") pod "c3bed1ce-0365-4baa-9f88-d3052d1f86db" (UID: "c3bed1ce-0365-4baa-9f88-d3052d1f86db"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.687627 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kvnmv\" (UniqueName: \"kubernetes.io/projected/c3bed1ce-0365-4baa-9f88-d3052d1f86db-kube-api-access-kvnmv\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.687657 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c3bed1ce-0365-4baa-9f88-d3052d1f86db-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.687665 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3bed1ce-0365-4baa-9f88-d3052d1f86db-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.962073 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"6f6f6892-d9d6-4f71-bc65-8e47c15bddc1","Type":"ContainerStarted","Data":"b51d35896059af96f2f7f649882c75eb9b5086ae29bfb111596326e17ec35c67"} Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.967569 4708 generic.go:334] "Generic (PLEG): container finished" podID="b484936d-0feb-4107-a28a-2e0c7ac7e267" containerID="d7204ca821ac865e56198e30bcef5ebc1e063f8442a6f07e4b265d43695a0680" exitCode=0 Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.967691 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qt4x7" event={"ID":"b484936d-0feb-4107-a28a-2e0c7ac7e267","Type":"ContainerDied","Data":"d7204ca821ac865e56198e30bcef5ebc1e063f8442a6f07e4b265d43695a0680"} Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.967746 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qt4x7" event={"ID":"b484936d-0feb-4107-a28a-2e0c7ac7e267","Type":"ContainerStarted","Data":"998bf07cbec2d4466053e569831e4e5b6507019ccf303d5ee9aa945753ed7eb4"} Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.981033 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"6cc07076-e637-443a-85c1-7b72beeb6cc7","Type":"ContainerStarted","Data":"1e3d7a2c241308fa462653f6ec1c38cadc62db3ed65a2e6b157498950ab6e750"} Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.983870 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d3d398d5-587b-48e8-b90b-a3e511311982","Type":"ContainerStarted","Data":"c1553636000125402752fdd9ef33e05c485502865a9d5bd21067fe13edf2da5f"} Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.986385 4708 generic.go:334] "Generic (PLEG): container finished" podID="c3bed1ce-0365-4baa-9f88-d3052d1f86db" containerID="835d73dde6fc391422ddb0d3f4bc0e99815d2df2768c4fd436e231b86f8f9786" exitCode=0 Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.986463 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" event={"ID":"c3bed1ce-0365-4baa-9f88-d3052d1f86db","Type":"ContainerDied","Data":"835d73dde6fc391422ddb0d3f4bc0e99815d2df2768c4fd436e231b86f8f9786"} Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.986494 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" event={"ID":"c3bed1ce-0365-4baa-9f88-d3052d1f86db","Type":"ContainerDied","Data":"9debb7a3d1d2aa6e84abda68ef5efe4e87e5d4d3425f5e341f2786f655d06cab"} Feb 27 17:15:30 crc 
kubenswrapper[4708]: I0227 17:15:30.986516 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7cb5889db5-mtzfr" Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.986526 4708 scope.go:117] "RemoveContainer" containerID="835d73dde6fc391422ddb0d3f4bc0e99815d2df2768c4fd436e231b86f8f9786" Feb 27 17:15:30 crc kubenswrapper[4708]: I0227 17:15:30.991244 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wq4dg" event={"ID":"487e829b-b6b1-4c03-8c90-f35a10aee7a2","Type":"ContainerStarted","Data":"40a1fd447ae94e4d97491dc9529bc2298c21f2d4472242b2f559e1561bc7497e"} Feb 27 17:15:31 crc kubenswrapper[4708]: I0227 17:15:31.003950 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=-9223371952.850851 podStartE2EDuration="1m24.003925537s" podCreationTimestamp="2026-02-27 17:14:07 +0000 UTC" firstStartedPulling="2026-02-27 17:14:10.111128952 +0000 UTC m=+1248.626926539" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:15:31.00227126 +0000 UTC m=+1329.518068887" watchObservedRunningTime="2026-02-27 17:15:31.003925537 +0000 UTC m=+1329.519723164" Feb 27 17:15:31 crc kubenswrapper[4708]: I0227 17:15:31.008227 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c129cc00-13ca-4502-aa1b-866133b164a9","Type":"ContainerStarted","Data":"3833339ae9a1512b80609665e99a753ad63e0b74dff9ef6306f93413e6a2d44e"} Feb 27 17:15:31 crc kubenswrapper[4708]: I0227 17:15:31.054040 4708 scope.go:117] "RemoveContainer" containerID="67992c29947c026954256c07a6f33c9fd6945dad3e4ef7543f58d9cae377784f" Feb 27 17:15:31 crc kubenswrapper[4708]: I0227 17:15:31.061087 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-wq4dg" podStartSLOduration=3.245150987 podStartE2EDuration="24.061062438s" podCreationTimestamp="2026-02-27 17:15:07 +0000 UTC" firstStartedPulling="2026-02-27 17:15:09.260462272 +0000 UTC m=+1307.776259859" lastFinishedPulling="2026-02-27 17:15:30.076373693 +0000 UTC m=+1328.592171310" observedRunningTime="2026-02-27 17:15:31.037353215 +0000 UTC m=+1329.553150822" watchObservedRunningTime="2026-02-27 17:15:31.061062438 +0000 UTC m=+1329.576860035" Feb 27 17:15:31 crc kubenswrapper[4708]: I0227 17:15:31.100081 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mtzfr"] Feb 27 17:15:31 crc kubenswrapper[4708]: I0227 17:15:31.103120 4708 scope.go:117] "RemoveContainer" containerID="835d73dde6fc391422ddb0d3f4bc0e99815d2df2768c4fd436e231b86f8f9786" Feb 27 17:15:31 crc kubenswrapper[4708]: E0227 17:15:31.103654 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"835d73dde6fc391422ddb0d3f4bc0e99815d2df2768c4fd436e231b86f8f9786\": container with ID starting with 835d73dde6fc391422ddb0d3f4bc0e99815d2df2768c4fd436e231b86f8f9786 not found: ID does not exist" containerID="835d73dde6fc391422ddb0d3f4bc0e99815d2df2768c4fd436e231b86f8f9786" Feb 27 17:15:31 crc kubenswrapper[4708]: I0227 17:15:31.103708 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"835d73dde6fc391422ddb0d3f4bc0e99815d2df2768c4fd436e231b86f8f9786"} err="failed to get container status \"835d73dde6fc391422ddb0d3f4bc0e99815d2df2768c4fd436e231b86f8f9786\": rpc error: code = NotFound desc 
= could not find container \"835d73dde6fc391422ddb0d3f4bc0e99815d2df2768c4fd436e231b86f8f9786\": container with ID starting with 835d73dde6fc391422ddb0d3f4bc0e99815d2df2768c4fd436e231b86f8f9786 not found: ID does not exist" Feb 27 17:15:31 crc kubenswrapper[4708]: I0227 17:15:31.103786 4708 scope.go:117] "RemoveContainer" containerID="67992c29947c026954256c07a6f33c9fd6945dad3e4ef7543f58d9cae377784f" Feb 27 17:15:31 crc kubenswrapper[4708]: E0227 17:15:31.104144 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67992c29947c026954256c07a6f33c9fd6945dad3e4ef7543f58d9cae377784f\": container with ID starting with 67992c29947c026954256c07a6f33c9fd6945dad3e4ef7543f58d9cae377784f not found: ID does not exist" containerID="67992c29947c026954256c07a6f33c9fd6945dad3e4ef7543f58d9cae377784f" Feb 27 17:15:31 crc kubenswrapper[4708]: I0227 17:15:31.104196 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67992c29947c026954256c07a6f33c9fd6945dad3e4ef7543f58d9cae377784f"} err="failed to get container status \"67992c29947c026954256c07a6f33c9fd6945dad3e4ef7543f58d9cae377784f\": rpc error: code = NotFound desc = could not find container \"67992c29947c026954256c07a6f33c9fd6945dad3e4ef7543f58d9cae377784f\": container with ID starting with 67992c29947c026954256c07a6f33c9fd6945dad3e4ef7543f58d9cae377784f not found: ID does not exist" Feb 27 17:15:31 crc kubenswrapper[4708]: I0227 17:15:31.109320 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7cb5889db5-mtzfr"] Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:32.269243 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3bed1ce-0365-4baa-9f88-d3052d1f86db" path="/var/lib/kubelet/pods/c3bed1ce-0365-4baa-9f88-d3052d1f86db/volumes" Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:32.414466 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-qt4x7" Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:32.543949 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b484936d-0feb-4107-a28a-2e0c7ac7e267-operator-scripts\") pod \"b484936d-0feb-4107-a28a-2e0c7ac7e267\" (UID: \"b484936d-0feb-4107-a28a-2e0c7ac7e267\") " Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:32.544078 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qscz7\" (UniqueName: \"kubernetes.io/projected/b484936d-0feb-4107-a28a-2e0c7ac7e267-kube-api-access-qscz7\") pod \"b484936d-0feb-4107-a28a-2e0c7ac7e267\" (UID: \"b484936d-0feb-4107-a28a-2e0c7ac7e267\") " Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:32.544663 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b484936d-0feb-4107-a28a-2e0c7ac7e267-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b484936d-0feb-4107-a28a-2e0c7ac7e267" (UID: "b484936d-0feb-4107-a28a-2e0c7ac7e267"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:32.550189 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b484936d-0feb-4107-a28a-2e0c7ac7e267-kube-api-access-qscz7" (OuterVolumeSpecName: "kube-api-access-qscz7") pod "b484936d-0feb-4107-a28a-2e0c7ac7e267" (UID: "b484936d-0feb-4107-a28a-2e0c7ac7e267"). InnerVolumeSpecName "kube-api-access-qscz7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:32.646135 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b484936d-0feb-4107-a28a-2e0c7ac7e267-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:32.646163 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qscz7\" (UniqueName: \"kubernetes.io/projected/b484936d-0feb-4107-a28a-2e0c7ac7e267-kube-api-access-qscz7\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:33.038191 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d3d398d5-587b-48e8-b90b-a3e511311982","Type":"ContainerStarted","Data":"fabd8b82da49f9132826826fc57b8a198c3907b4eeb8d89cce9983e69635a9b6"} Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:33.038411 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"d3d398d5-587b-48e8-b90b-a3e511311982","Type":"ContainerStarted","Data":"91b6763603fba595462d9a89991491a0fca33349ab1ee1c06568e6bb0d425f29"} Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:33.038934 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:33.050136 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-qt4x7" event={"ID":"b484936d-0feb-4107-a28a-2e0c7ac7e267","Type":"ContainerDied","Data":"998bf07cbec2d4466053e569831e4e5b6507019ccf303d5ee9aa945753ed7eb4"} Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:33.050214 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="998bf07cbec2d4466053e569831e4e5b6507019ccf303d5ee9aa945753ed7eb4" Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:33.050214 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-qt4x7" Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:33.194561 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="238aef54-b0dd-495b-a5f8-66cc43b12088" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 27 17:15:33 crc kubenswrapper[4708]: I0227 17:15:33.454054 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=9.23774694 podStartE2EDuration="10.454030946s" podCreationTimestamp="2026-02-27 17:15:23 +0000 UTC" firstStartedPulling="2026-02-27 17:15:30.626037456 +0000 UTC m=+1329.141835043" lastFinishedPulling="2026-02-27 17:15:31.842321472 +0000 UTC m=+1330.358119049" observedRunningTime="2026-02-27 17:15:33.060198003 +0000 UTC m=+1331.575995630" watchObservedRunningTime="2026-02-27 17:15:33.454030946 +0000 UTC m=+1331.969828563" Feb 27 17:15:34 crc kubenswrapper[4708]: I0227 17:15:34.073581 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/alertmanager-metric-storage-0" event={"ID":"6cc07076-e637-443a-85c1-7b72beeb6cc7","Type":"ContainerStarted","Data":"6522517c74480d2bcdd07e3e2f9a27051257944965267d9e5b1dd80f776f2494"} Feb 27 17:15:34 crc kubenswrapper[4708]: I0227 17:15:34.074033 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/alertmanager-metric-storage-0" Feb 27 17:15:34 crc kubenswrapper[4708]: I0227 17:15:34.078420 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/alertmanager-metric-storage-0" Feb 27 17:15:34 crc kubenswrapper[4708]: I0227 17:15:34.108835 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/alertmanager-metric-storage-0" podStartSLOduration=41.98063161 podStartE2EDuration="1m22.108807061s" podCreationTimestamp="2026-02-27 17:14:12 +0000 UTC" firstStartedPulling="2026-02-27 17:14:49.96256124 +0000 UTC m=+1288.478358827" lastFinishedPulling="2026-02-27 17:15:30.090736651 +0000 UTC m=+1328.606534278" observedRunningTime="2026-02-27 17:15:34.105489127 +0000 UTC m=+1332.621286774" watchObservedRunningTime="2026-02-27 17:15:34.108807061 +0000 UTC m=+1332.624604688" Feb 27 17:15:35 crc kubenswrapper[4708]: I0227 17:15:35.084163 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c129cc00-13ca-4502-aa1b-866133b164a9","Type":"ContainerStarted","Data":"cbca860b738855f082dafc10482664274da1c261c7c04647e10b6647a4499dee"} Feb 27 17:15:35 crc kubenswrapper[4708]: I0227 17:15:35.633498 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:15:35 crc kubenswrapper[4708]: I0227 17:15:35.633613 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:15:35 crc kubenswrapper[4708]: I0227 17:15:35.725321 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:15:35 crc kubenswrapper[4708]: E0227 17:15:35.725554 4708 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Feb 27 17:15:35 crc kubenswrapper[4708]: E0227 17:15:35.725591 4708 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Feb 27 17:15:35 crc kubenswrapper[4708]: E0227 17:15:35.725680 4708 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift podName:e8a41f59-1fee-425c-a42a-de40caa66c0f nodeName:}" failed. No retries permitted until 2026-02-27 17:16:07.725654121 +0000 UTC m=+1366.241451748 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift") pod "swift-storage-0" (UID: "e8a41f59-1fee-425c-a42a-de40caa66c0f") : configmap "swift-ring-files" not found Feb 27 17:15:36 crc kubenswrapper[4708]: I0227 17:15:36.096554 4708 generic.go:334] "Generic (PLEG): container finished" podID="eb2fe191-cb57-46a6-9797-c9890640ff74" containerID="ea6efe329dc3900ef121cd51e3a92aff2c13514c06bcc2dca88ecfedec053939" exitCode=0 Feb 27 17:15:36 crc kubenswrapper[4708]: I0227 17:15:36.096889 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eb2fe191-cb57-46a6-9797-c9890640ff74","Type":"ContainerDied","Data":"ea6efe329dc3900ef121cd51e3a92aff2c13514c06bcc2dca88ecfedec053939"} Feb 27 17:15:36 crc kubenswrapper[4708]: I0227 17:15:36.116326 4708 generic.go:334] "Generic (PLEG): container finished" podID="32b89444-fadf-43c8-b552-e5071fc91481" containerID="8d25a437ba280e82ae6ccb8c17682dd2c1e48ce39e30a689e3c5b1b70467c5c8" exitCode=0 Feb 27 17:15:36 crc kubenswrapper[4708]: I0227 17:15:36.116499 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"32b89444-fadf-43c8-b552-e5071fc91481","Type":"ContainerDied","Data":"8d25a437ba280e82ae6ccb8c17682dd2c1e48ce39e30a689e3c5b1b70467c5c8"} Feb 27 17:15:36 crc kubenswrapper[4708]: I0227 17:15:36.758453 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-6zlsq" podUID="2410b28c-0b9c-4da0-826a-bcbbab63a292" containerName="ovn-controller" probeResult="failure" output=< Feb 27 17:15:36 crc kubenswrapper[4708]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Feb 27 17:15:36 crc kubenswrapper[4708]: > Feb 27 17:15:36 crc kubenswrapper[4708]: I0227 17:15:36.776988 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:15:36 crc kubenswrapper[4708]: I0227 17:15:36.788577 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-k2qzb" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.018162 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6zlsq-config-dkfpj"] Feb 27 17:15:37 crc kubenswrapper[4708]: E0227 17:15:37.019295 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3bed1ce-0365-4baa-9f88-d3052d1f86db" containerName="init" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.019329 4708 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c3bed1ce-0365-4baa-9f88-d3052d1f86db" containerName="init" Feb 27 17:15:37 crc kubenswrapper[4708]: E0227 17:15:37.019358 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3bed1ce-0365-4baa-9f88-d3052d1f86db" containerName="dnsmasq-dns" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.019371 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3bed1ce-0365-4baa-9f88-d3052d1f86db" containerName="dnsmasq-dns" Feb 27 17:15:37 crc kubenswrapper[4708]: E0227 17:15:37.019419 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b484936d-0feb-4107-a28a-2e0c7ac7e267" containerName="mariadb-account-create-update" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.019433 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b484936d-0feb-4107-a28a-2e0c7ac7e267" containerName="mariadb-account-create-update" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.019744 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="b484936d-0feb-4107-a28a-2e0c7ac7e267" containerName="mariadb-account-create-update" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.019786 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3bed1ce-0365-4baa-9f88-d3052d1f86db" containerName="dnsmasq-dns" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.021016 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.040491 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.047883 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6zlsq-config-dkfpj"] Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.127538 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"32b89444-fadf-43c8-b552-e5071fc91481","Type":"ContainerStarted","Data":"1ca88ce2694172c59a6608e1e8fb608a1335235e3e507e2c478559eb8a4c31f7"} Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.128677 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.131924 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eb2fe191-cb57-46a6-9797-c9890640ff74","Type":"ContainerStarted","Data":"d70b0921760a7a848190d0fade08cdb551a09315fa96f6b5a405260ee0c01cbd"} Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.132467 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.160686 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=39.336136522 podStartE2EDuration="1m32.160665432s" podCreationTimestamp="2026-02-27 17:14:05 +0000 UTC" firstStartedPulling="2026-02-27 17:14:07.904786707 +0000 UTC m=+1246.420584294" lastFinishedPulling="2026-02-27 17:15:00.729315617 +0000 UTC m=+1299.245113204" observedRunningTime="2026-02-27 17:15:37.153400796 +0000 UTC m=+1335.669198383" watchObservedRunningTime="2026-02-27 17:15:37.160665432 +0000 UTC m=+1335.676463039" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.169127 4708 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-run-ovn\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.169185 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-run\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.169214 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-log-ovn\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.169315 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz2x6\" (UniqueName: \"kubernetes.io/projected/100cf8e9-b657-4140-82e0-bf9e976024cc-kube-api-access-pz2x6\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.169371 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/100cf8e9-b657-4140-82e0-bf9e976024cc-scripts\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.169432 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/100cf8e9-b657-4140-82e0-bf9e976024cc-additional-scripts\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.186951 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.454814075 podStartE2EDuration="1m32.186931087s" podCreationTimestamp="2026-02-27 17:14:05 +0000 UTC" firstStartedPulling="2026-02-27 17:14:08.198629439 +0000 UTC m=+1246.714427026" lastFinishedPulling="2026-02-27 17:15:00.930746451 +0000 UTC m=+1299.446544038" observedRunningTime="2026-02-27 17:15:37.178637422 +0000 UTC m=+1335.694435019" watchObservedRunningTime="2026-02-27 17:15:37.186931087 +0000 UTC m=+1335.702728684" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.270805 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/100cf8e9-b657-4140-82e0-bf9e976024cc-additional-scripts\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.271089 4708 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-run-ovn\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.271171 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-run\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.271236 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-log-ovn\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.271438 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pz2x6\" (UniqueName: \"kubernetes.io/projected/100cf8e9-b657-4140-82e0-bf9e976024cc-kube-api-access-pz2x6\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.271548 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/100cf8e9-b657-4140-82e0-bf9e976024cc-scripts\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.271557 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/100cf8e9-b657-4140-82e0-bf9e976024cc-additional-scripts\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.272474 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-log-ovn\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.272479 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-run-ovn\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.272533 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-run\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.273412 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/100cf8e9-b657-4140-82e0-bf9e976024cc-scripts\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.291987 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pz2x6\" (UniqueName: \"kubernetes.io/projected/100cf8e9-b657-4140-82e0-bf9e976024cc-kube-api-access-pz2x6\") pod \"ovn-controller-6zlsq-config-dkfpj\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.366865 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:37 crc kubenswrapper[4708]: I0227 17:15:37.854122 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6zlsq-config-dkfpj"] Feb 27 17:15:38 crc kubenswrapper[4708]: I0227 17:15:38.142217 4708 generic.go:334] "Generic (PLEG): container finished" podID="487e829b-b6b1-4c03-8c90-f35a10aee7a2" containerID="40a1fd447ae94e4d97491dc9529bc2298c21f2d4472242b2f559e1561bc7497e" exitCode=0 Feb 27 17:15:38 crc kubenswrapper[4708]: I0227 17:15:38.143640 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wq4dg" event={"ID":"487e829b-b6b1-4c03-8c90-f35a10aee7a2","Type":"ContainerDied","Data":"40a1fd447ae94e4d97491dc9529bc2298c21f2d4472242b2f559e1561bc7497e"} Feb 27 17:15:38 crc kubenswrapper[4708]: W0227 17:15:38.619106 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod100cf8e9_b657_4140_82e0_bf9e976024cc.slice/crio-390d3d7a4b814af7fc700cdcf0ed2bb564cfcdd38345517887cbb8198d77f62c WatchSource:0}: Error finding container 390d3d7a4b814af7fc700cdcf0ed2bb564cfcdd38345517887cbb8198d77f62c: Status 404 returned error can't find the container with id 390d3d7a4b814af7fc700cdcf0ed2bb564cfcdd38345517887cbb8198d77f62c Feb 27 17:15:38 crc kubenswrapper[4708]: I0227 17:15:38.982018 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Feb 27 17:15:38 crc kubenswrapper[4708]: I0227 17:15:38.983355 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.073330 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.152150 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6zlsq-config-dkfpj" event={"ID":"100cf8e9-b657-4140-82e0-bf9e976024cc","Type":"ContainerStarted","Data":"b3417a0104cf53c156ef84707529fa10a92f57d5a47d891b57693c2658122b76"} Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.152236 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6zlsq-config-dkfpj" event={"ID":"100cf8e9-b657-4140-82e0-bf9e976024cc","Type":"ContainerStarted","Data":"390d3d7a4b814af7fc700cdcf0ed2bb564cfcdd38345517887cbb8198d77f62c"} Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.155355 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c129cc00-13ca-4502-aa1b-866133b164a9","Type":"ContainerStarted","Data":"82c4054462fef15e551b6536a63a975747cfcb6c6e6be6870d18b02b0cc3595d"} 
Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.182027 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-6zlsq-config-dkfpj" podStartSLOduration=3.182011877 podStartE2EDuration="3.182011877s" podCreationTimestamp="2026-02-27 17:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:15:39.168146443 +0000 UTC m=+1337.683944070" watchObservedRunningTime="2026-02-27 17:15:39.182011877 +0000 UTC m=+1337.697809464" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.263084 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.312156 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=39.145821125 podStartE2EDuration="1m27.312125058s" podCreationTimestamp="2026-02-27 17:14:12 +0000 UTC" firstStartedPulling="2026-02-27 17:14:50.54429064 +0000 UTC m=+1289.060088217" lastFinishedPulling="2026-02-27 17:15:38.710594563 +0000 UTC m=+1337.226392150" observedRunningTime="2026-02-27 17:15:39.204392452 +0000 UTC m=+1337.720190069" watchObservedRunningTime="2026-02-27 17:15:39.312125058 +0000 UTC m=+1337.827922665" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.597900 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.615622 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-swiftconf\") pod \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.615824 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-combined-ca-bundle\") pod \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.616491 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-dispersionconf\") pod \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.616537 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7prrs\" (UniqueName: \"kubernetes.io/projected/487e829b-b6b1-4c03-8c90-f35a10aee7a2-kube-api-access-7prrs\") pod \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.616666 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/487e829b-b6b1-4c03-8c90-f35a10aee7a2-scripts\") pod \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.616694 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: 
\"kubernetes.io/configmap/487e829b-b6b1-4c03-8c90-f35a10aee7a2-ring-data-devices\") pod \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.616735 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/487e829b-b6b1-4c03-8c90-f35a10aee7a2-etc-swift\") pod \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\" (UID: \"487e829b-b6b1-4c03-8c90-f35a10aee7a2\") " Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.617981 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/487e829b-b6b1-4c03-8c90-f35a10aee7a2-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "487e829b-b6b1-4c03-8c90-f35a10aee7a2" (UID: "487e829b-b6b1-4c03-8c90-f35a10aee7a2"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.618799 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/487e829b-b6b1-4c03-8c90-f35a10aee7a2-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "487e829b-b6b1-4c03-8c90-f35a10aee7a2" (UID: "487e829b-b6b1-4c03-8c90-f35a10aee7a2"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.624191 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "487e829b-b6b1-4c03-8c90-f35a10aee7a2" (UID: "487e829b-b6b1-4c03-8c90-f35a10aee7a2"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.628599 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/487e829b-b6b1-4c03-8c90-f35a10aee7a2-kube-api-access-7prrs" (OuterVolumeSpecName: "kube-api-access-7prrs") pod "487e829b-b6b1-4c03-8c90-f35a10aee7a2" (UID: "487e829b-b6b1-4c03-8c90-f35a10aee7a2"). InnerVolumeSpecName "kube-api-access-7prrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.644675 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "487e829b-b6b1-4c03-8c90-f35a10aee7a2" (UID: "487e829b-b6b1-4c03-8c90-f35a10aee7a2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.651367 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/487e829b-b6b1-4c03-8c90-f35a10aee7a2-scripts" (OuterVolumeSpecName: "scripts") pod "487e829b-b6b1-4c03-8c90-f35a10aee7a2" (UID: "487e829b-b6b1-4c03-8c90-f35a10aee7a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.652925 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "487e829b-b6b1-4c03-8c90-f35a10aee7a2" (UID: "487e829b-b6b1-4c03-8c90-f35a10aee7a2"). InnerVolumeSpecName "swiftconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.718588 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.718619 4708 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-dispersionconf\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.718628 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7prrs\" (UniqueName: \"kubernetes.io/projected/487e829b-b6b1-4c03-8c90-f35a10aee7a2-kube-api-access-7prrs\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.718638 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/487e829b-b6b1-4c03-8c90-f35a10aee7a2-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.718646 4708 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/487e829b-b6b1-4c03-8c90-f35a10aee7a2-ring-data-devices\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.718654 4708 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/487e829b-b6b1-4c03-8c90-f35a10aee7a2-etc-swift\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:39 crc kubenswrapper[4708]: I0227 17:15:39.718662 4708 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/487e829b-b6b1-4c03-8c90-f35a10aee7a2-swiftconf\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:40 crc kubenswrapper[4708]: I0227 17:15:40.185189 4708 generic.go:334] "Generic (PLEG): container finished" podID="100cf8e9-b657-4140-82e0-bf9e976024cc" containerID="b3417a0104cf53c156ef84707529fa10a92f57d5a47d891b57693c2658122b76" exitCode=0 Feb 27 17:15:40 crc kubenswrapper[4708]: I0227 17:15:40.185460 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6zlsq-config-dkfpj" event={"ID":"100cf8e9-b657-4140-82e0-bf9e976024cc","Type":"ContainerDied","Data":"b3417a0104cf53c156ef84707529fa10a92f57d5a47d891b57693c2658122b76"} Feb 27 17:15:40 crc kubenswrapper[4708]: I0227 17:15:40.203773 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-ring-rebalance-wq4dg" Feb 27 17:15:40 crc kubenswrapper[4708]: I0227 17:15:40.203839 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-wq4dg" event={"ID":"487e829b-b6b1-4c03-8c90-f35a10aee7a2","Type":"ContainerDied","Data":"424b3cf59cd7e35ccb3e10d9ca8245de3e7a83cdb341051b289c59eea4dec243"} Feb 27 17:15:40 crc kubenswrapper[4708]: I0227 17:15:40.203880 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="424b3cf59cd7e35ccb3e10d9ca8245de3e7a83cdb341051b289c59eea4dec243" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.193123 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-v4pxq"] Feb 27 17:15:41 crc kubenswrapper[4708]: E0227 17:15:41.193467 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="487e829b-b6b1-4c03-8c90-f35a10aee7a2" containerName="swift-ring-rebalance" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.193480 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="487e829b-b6b1-4c03-8c90-f35a10aee7a2" containerName="swift-ring-rebalance" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.193670 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="487e829b-b6b1-4c03-8c90-f35a10aee7a2" containerName="swift-ring-rebalance" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.194302 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-v4pxq" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.209038 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-v4pxq"] Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.313885 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-b7e7-account-create-update-985br"] Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.315211 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b7e7-account-create-update-985br" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.317435 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.329609 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b7e7-account-create-update-985br"] Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.356475 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cff7ba0-90d0-441b-ab1a-9d30c9f29e28-operator-scripts\") pod \"keystone-db-create-v4pxq\" (UID: \"0cff7ba0-90d0-441b-ab1a-9d30c9f29e28\") " pod="openstack/keystone-db-create-v4pxq" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.356665 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd49n\" (UniqueName: \"kubernetes.io/projected/0cff7ba0-90d0-441b-ab1a-9d30c9f29e28-kube-api-access-nd49n\") pod \"keystone-db-create-v4pxq\" (UID: \"0cff7ba0-90d0-441b-ab1a-9d30c9f29e28\") " pod="openstack/keystone-db-create-v4pxq" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.411676 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-dcz66"] Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.413156 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-dcz66" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.423466 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dcz66"] Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.457800 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd49n\" (UniqueName: \"kubernetes.io/projected/0cff7ba0-90d0-441b-ab1a-9d30c9f29e28-kube-api-access-nd49n\") pod \"keystone-db-create-v4pxq\" (UID: \"0cff7ba0-90d0-441b-ab1a-9d30c9f29e28\") " pod="openstack/keystone-db-create-v4pxq" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.457867 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49315bcd-31dd-4e2e-8874-12904298dba9-operator-scripts\") pod \"keystone-b7e7-account-create-update-985br\" (UID: \"49315bcd-31dd-4e2e-8874-12904298dba9\") " pod="openstack/keystone-b7e7-account-create-update-985br" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.457916 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr5b8\" (UniqueName: \"kubernetes.io/projected/49315bcd-31dd-4e2e-8874-12904298dba9-kube-api-access-sr5b8\") pod \"keystone-b7e7-account-create-update-985br\" (UID: \"49315bcd-31dd-4e2e-8874-12904298dba9\") " pod="openstack/keystone-b7e7-account-create-update-985br" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.458022 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cff7ba0-90d0-441b-ab1a-9d30c9f29e28-operator-scripts\") pod \"keystone-db-create-v4pxq\" (UID: \"0cff7ba0-90d0-441b-ab1a-9d30c9f29e28\") " pod="openstack/keystone-db-create-v4pxq" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.458620 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cff7ba0-90d0-441b-ab1a-9d30c9f29e28-operator-scripts\") pod \"keystone-db-create-v4pxq\" (UID: \"0cff7ba0-90d0-441b-ab1a-9d30c9f29e28\") " pod="openstack/keystone-db-create-v4pxq" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.478015 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd49n\" (UniqueName: \"kubernetes.io/projected/0cff7ba0-90d0-441b-ab1a-9d30c9f29e28-kube-api-access-nd49n\") pod \"keystone-db-create-v4pxq\" (UID: \"0cff7ba0-90d0-441b-ab1a-9d30c9f29e28\") " pod="openstack/keystone-db-create-v4pxq" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.504866 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-752e-account-create-update-r66l6"] Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.505914 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-752e-account-create-update-r66l6" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.510079 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.517216 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-752e-account-create-update-r66l6"] Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.529675 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-create-v4pxq" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.559265 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afe8f07a-d35a-4288-a552-9351a6ad0079-operator-scripts\") pod \"placement-db-create-dcz66\" (UID: \"afe8f07a-d35a-4288-a552-9351a6ad0079\") " pod="openstack/placement-db-create-dcz66" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.559353 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cdnl\" (UniqueName: \"kubernetes.io/projected/afe8f07a-d35a-4288-a552-9351a6ad0079-kube-api-access-4cdnl\") pod \"placement-db-create-dcz66\" (UID: \"afe8f07a-d35a-4288-a552-9351a6ad0079\") " pod="openstack/placement-db-create-dcz66" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.559405 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49315bcd-31dd-4e2e-8874-12904298dba9-operator-scripts\") pod \"keystone-b7e7-account-create-update-985br\" (UID: \"49315bcd-31dd-4e2e-8874-12904298dba9\") " pod="openstack/keystone-b7e7-account-create-update-985br" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.559440 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sr5b8\" (UniqueName: \"kubernetes.io/projected/49315bcd-31dd-4e2e-8874-12904298dba9-kube-api-access-sr5b8\") pod \"keystone-b7e7-account-create-update-985br\" (UID: \"49315bcd-31dd-4e2e-8874-12904298dba9\") " pod="openstack/keystone-b7e7-account-create-update-985br" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.560714 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49315bcd-31dd-4e2e-8874-12904298dba9-operator-scripts\") pod \"keystone-b7e7-account-create-update-985br\" (UID: \"49315bcd-31dd-4e2e-8874-12904298dba9\") " pod="openstack/keystone-b7e7-account-create-update-985br" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.581092 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr5b8\" (UniqueName: \"kubernetes.io/projected/49315bcd-31dd-4e2e-8874-12904298dba9-kube-api-access-sr5b8\") pod \"keystone-b7e7-account-create-update-985br\" (UID: \"49315bcd-31dd-4e2e-8874-12904298dba9\") " pod="openstack/keystone-b7e7-account-create-update-985br" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.630289 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b7e7-account-create-update-985br" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.663097 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afe8f07a-d35a-4288-a552-9351a6ad0079-operator-scripts\") pod \"placement-db-create-dcz66\" (UID: \"afe8f07a-d35a-4288-a552-9351a6ad0079\") " pod="openstack/placement-db-create-dcz66" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.663196 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cdnl\" (UniqueName: \"kubernetes.io/projected/afe8f07a-d35a-4288-a552-9351a6ad0079-kube-api-access-4cdnl\") pod \"placement-db-create-dcz66\" (UID: \"afe8f07a-d35a-4288-a552-9351a6ad0079\") " pod="openstack/placement-db-create-dcz66" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.663240 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/588e4d1d-d7fe-425f-9fe3-032b1afd18eb-operator-scripts\") pod \"placement-752e-account-create-update-r66l6\" (UID: \"588e4d1d-d7fe-425f-9fe3-032b1afd18eb\") " pod="openstack/placement-752e-account-create-update-r66l6" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.663326 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9kg7\" (UniqueName: \"kubernetes.io/projected/588e4d1d-d7fe-425f-9fe3-032b1afd18eb-kube-api-access-p9kg7\") pod \"placement-752e-account-create-update-r66l6\" (UID: \"588e4d1d-d7fe-425f-9fe3-032b1afd18eb\") " pod="openstack/placement-752e-account-create-update-r66l6" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.663917 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afe8f07a-d35a-4288-a552-9351a6ad0079-operator-scripts\") pod \"placement-db-create-dcz66\" (UID: \"afe8f07a-d35a-4288-a552-9351a6ad0079\") " pod="openstack/placement-db-create-dcz66" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.682230 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cdnl\" (UniqueName: \"kubernetes.io/projected/afe8f07a-d35a-4288-a552-9351a6ad0079-kube-api-access-4cdnl\") pod \"placement-db-create-dcz66\" (UID: \"afe8f07a-d35a-4288-a552-9351a6ad0079\") " pod="openstack/placement-db-create-dcz66" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.704413 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.728029 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-6zlsq" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.755286 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-dcz66" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.770062 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/588e4d1d-d7fe-425f-9fe3-032b1afd18eb-operator-scripts\") pod \"placement-752e-account-create-update-r66l6\" (UID: \"588e4d1d-d7fe-425f-9fe3-032b1afd18eb\") " pod="openstack/placement-752e-account-create-update-r66l6" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.770165 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9kg7\" (UniqueName: \"kubernetes.io/projected/588e4d1d-d7fe-425f-9fe3-032b1afd18eb-kube-api-access-p9kg7\") pod \"placement-752e-account-create-update-r66l6\" (UID: \"588e4d1d-d7fe-425f-9fe3-032b1afd18eb\") " pod="openstack/placement-752e-account-create-update-r66l6" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.771052 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/588e4d1d-d7fe-425f-9fe3-032b1afd18eb-operator-scripts\") pod \"placement-752e-account-create-update-r66l6\" (UID: \"588e4d1d-d7fe-425f-9fe3-032b1afd18eb\") " pod="openstack/placement-752e-account-create-update-r66l6" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.805178 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9kg7\" (UniqueName: \"kubernetes.io/projected/588e4d1d-d7fe-425f-9fe3-032b1afd18eb-kube-api-access-p9kg7\") pod \"placement-752e-account-create-update-r66l6\" (UID: \"588e4d1d-d7fe-425f-9fe3-032b1afd18eb\") " pod="openstack/placement-752e-account-create-update-r66l6" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.835855 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-752e-account-create-update-r66l6" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.871062 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-run\") pod \"100cf8e9-b657-4140-82e0-bf9e976024cc\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.871200 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-run-ovn\") pod \"100cf8e9-b657-4140-82e0-bf9e976024cc\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.871282 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/100cf8e9-b657-4140-82e0-bf9e976024cc-additional-scripts\") pod \"100cf8e9-b657-4140-82e0-bf9e976024cc\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.871335 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pz2x6\" (UniqueName: \"kubernetes.io/projected/100cf8e9-b657-4140-82e0-bf9e976024cc-kube-api-access-pz2x6\") pod \"100cf8e9-b657-4140-82e0-bf9e976024cc\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.871360 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/100cf8e9-b657-4140-82e0-bf9e976024cc-scripts\") pod \"100cf8e9-b657-4140-82e0-bf9e976024cc\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.871381 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-log-ovn\") pod \"100cf8e9-b657-4140-82e0-bf9e976024cc\" (UID: \"100cf8e9-b657-4140-82e0-bf9e976024cc\") " Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.872392 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-run" (OuterVolumeSpecName: "var-run") pod "100cf8e9-b657-4140-82e0-bf9e976024cc" (UID: "100cf8e9-b657-4140-82e0-bf9e976024cc"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.872419 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "100cf8e9-b657-4140-82e0-bf9e976024cc" (UID: "100cf8e9-b657-4140-82e0-bf9e976024cc"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.873458 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "100cf8e9-b657-4140-82e0-bf9e976024cc" (UID: "100cf8e9-b657-4140-82e0-bf9e976024cc"). InnerVolumeSpecName "var-log-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.875425 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100cf8e9-b657-4140-82e0-bf9e976024cc-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "100cf8e9-b657-4140-82e0-bf9e976024cc" (UID: "100cf8e9-b657-4140-82e0-bf9e976024cc"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.876024 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/100cf8e9-b657-4140-82e0-bf9e976024cc-scripts" (OuterVolumeSpecName: "scripts") pod "100cf8e9-b657-4140-82e0-bf9e976024cc" (UID: "100cf8e9-b657-4140-82e0-bf9e976024cc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.879319 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/100cf8e9-b657-4140-82e0-bf9e976024cc-kube-api-access-pz2x6" (OuterVolumeSpecName: "kube-api-access-pz2x6") pod "100cf8e9-b657-4140-82e0-bf9e976024cc" (UID: "100cf8e9-b657-4140-82e0-bf9e976024cc"). InnerVolumeSpecName "kube-api-access-pz2x6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.973981 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pz2x6\" (UniqueName: \"kubernetes.io/projected/100cf8e9-b657-4140-82e0-bf9e976024cc-kube-api-access-pz2x6\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.974010 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/100cf8e9-b657-4140-82e0-bf9e976024cc-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.974019 4708 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.974028 4708 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-run\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.974037 4708 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/100cf8e9-b657-4140-82e0-bf9e976024cc-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:41 crc kubenswrapper[4708]: I0227 17:15:41.974045 4708 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/100cf8e9-b657-4140-82e0-bf9e976024cc-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.047624 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-v4pxq"] Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.224038 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6zlsq-config-dkfpj" event={"ID":"100cf8e9-b657-4140-82e0-bf9e976024cc","Type":"ContainerDied","Data":"390d3d7a4b814af7fc700cdcf0ed2bb564cfcdd38345517887cbb8198d77f62c"} Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.224080 4708 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="390d3d7a4b814af7fc700cdcf0ed2bb564cfcdd38345517887cbb8198d77f62c" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.224150 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6zlsq-config-dkfpj" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.226939 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v4pxq" event={"ID":"0cff7ba0-90d0-441b-ab1a-9d30c9f29e28","Type":"ContainerStarted","Data":"12c95044025665df7ee298f48210480cce4c6ac54f82d65c761296370207d966"} Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.291453 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-6zlsq-config-dkfpj"] Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.335069 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-6zlsq-config-dkfpj"] Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.348371 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-dcz66"] Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.358168 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-6zlsq-config-b5crb"] Feb 27 17:15:42 crc kubenswrapper[4708]: E0227 17:15:42.358675 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="100cf8e9-b657-4140-82e0-bf9e976024cc" containerName="ovn-config" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.358690 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="100cf8e9-b657-4140-82e0-bf9e976024cc" containerName="ovn-config" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.358945 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="100cf8e9-b657-4140-82e0-bf9e976024cc" containerName="ovn-config" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.360705 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.363416 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.377932 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6zlsq-config-b5crb"] Feb 27 17:15:42 crc kubenswrapper[4708]: W0227 17:15:42.443507 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod588e4d1d_d7fe_425f_9fe3_032b1afd18eb.slice/crio-2be5ccfd4a5d07798c203bb085dc4725cb0785213ee2e8b15f25cd649f7d9595 WatchSource:0}: Error finding container 2be5ccfd4a5d07798c203bb085dc4725cb0785213ee2e8b15f25cd649f7d9595: Status 404 returned error can't find the container with id 2be5ccfd4a5d07798c203bb085dc4725cb0785213ee2e8b15f25cd649f7d9595 Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.445332 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-752e-account-create-update-r66l6"] Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.455147 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-b7e7-account-create-update-985br"] Feb 27 17:15:42 crc kubenswrapper[4708]: W0227 17:15:42.459963 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod49315bcd_31dd_4e2e_8874_12904298dba9.slice/crio-3c4aa1d87fab17a45bcbd8c1b9be184e003370412491b584b82a6c198827da5d WatchSource:0}: Error finding container 3c4aa1d87fab17a45bcbd8c1b9be184e003370412491b584b82a6c198827da5d: Status 404 returned error can't find the container with id 3c4aa1d87fab17a45bcbd8c1b9be184e003370412491b584b82a6c198827da5d Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.490557 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-additional-scripts\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.490642 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-scripts\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.490739 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-run\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.490817 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjjxq\" (UniqueName: \"kubernetes.io/projected/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-kube-api-access-wjjxq\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc 
kubenswrapper[4708]: I0227 17:15:42.497741 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-run-ovn\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.497814 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-log-ovn\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.599022 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-run\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.599420 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjjxq\" (UniqueName: \"kubernetes.io/projected/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-kube-api-access-wjjxq\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.599496 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-run-ovn\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.599501 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-run\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.599527 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-log-ovn\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.599589 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-log-ovn\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.599631 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-additional-scripts\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " 
pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.599668 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-run-ovn\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.599699 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-scripts\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.600640 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-additional-scripts\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.602043 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-scripts\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.618840 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjjxq\" (UniqueName: \"kubernetes.io/projected/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-kube-api-access-wjjxq\") pod \"ovn-controller-6zlsq-config-b5crb\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:42 crc kubenswrapper[4708]: I0227 17:15:42.708519 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.197887 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="238aef54-b0dd-495b-a5f8-66cc43b12088" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.237269 4708 generic.go:334] "Generic (PLEG): container finished" podID="588e4d1d-d7fe-425f-9fe3-032b1afd18eb" containerID="8a4256956260177c4867a538b012f9139f60f4ac02e084fdcb7655705c504d8e" exitCode=0 Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.237333 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-752e-account-create-update-r66l6" event={"ID":"588e4d1d-d7fe-425f-9fe3-032b1afd18eb","Type":"ContainerDied","Data":"8a4256956260177c4867a538b012f9139f60f4ac02e084fdcb7655705c504d8e"} Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.237357 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-752e-account-create-update-r66l6" event={"ID":"588e4d1d-d7fe-425f-9fe3-032b1afd18eb","Type":"ContainerStarted","Data":"2be5ccfd4a5d07798c203bb085dc4725cb0785213ee2e8b15f25cd649f7d9595"} Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.239223 4708 generic.go:334] "Generic (PLEG): container finished" podID="afe8f07a-d35a-4288-a552-9351a6ad0079" containerID="c7b5df1574b323f13bae2164b67198681268e4e5cf4216396dab7a06607f9b6d" exitCode=0 Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.239321 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dcz66" event={"ID":"afe8f07a-d35a-4288-a552-9351a6ad0079","Type":"ContainerDied","Data":"c7b5df1574b323f13bae2164b67198681268e4e5cf4216396dab7a06607f9b6d"} Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.239352 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dcz66" event={"ID":"afe8f07a-d35a-4288-a552-9351a6ad0079","Type":"ContainerStarted","Data":"48c59cdd722720926d3b979cf7d636512ea35f131cb6c379a63fb3e848cf18af"} Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.240982 4708 generic.go:334] "Generic (PLEG): container finished" podID="0cff7ba0-90d0-441b-ab1a-9d30c9f29e28" containerID="4a03e50aed2737f235e797dc38c6490aa11ff4b1ff82b6435958d539f72864d4" exitCode=0 Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.241058 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v4pxq" event={"ID":"0cff7ba0-90d0-441b-ab1a-9d30c9f29e28","Type":"ContainerDied","Data":"4a03e50aed2737f235e797dc38c6490aa11ff4b1ff82b6435958d539f72864d4"} Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.243972 4708 generic.go:334] "Generic (PLEG): container finished" podID="49315bcd-31dd-4e2e-8874-12904298dba9" containerID="3e4a642d8dc4e9bb2356458753a6c853e934f8ee74d53cfd03cc2d8dc36c1877" exitCode=0 Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.244026 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b7e7-account-create-update-985br" event={"ID":"49315bcd-31dd-4e2e-8874-12904298dba9","Type":"ContainerDied","Data":"3e4a642d8dc4e9bb2356458753a6c853e934f8ee74d53cfd03cc2d8dc36c1877"} Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.244084 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b7e7-account-create-update-985br" 
event={"ID":"49315bcd-31dd-4e2e-8874-12904298dba9","Type":"ContainerStarted","Data":"3c4aa1d87fab17a45bcbd8c1b9be184e003370412491b584b82a6c198827da5d"} Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.262037 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-6zlsq-config-b5crb"] Feb 27 17:15:43 crc kubenswrapper[4708]: W0227 17:15:43.318409 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod39f0627e_0d87_4f9a_bfd3_4ae4bd1113f9.slice/crio-a5d197f4b89d18be42a07e349ab09fd6d27c60d91b6f875821f60c74e459df2b WatchSource:0}: Error finding container a5d197f4b89d18be42a07e349ab09fd6d27c60d91b6f875821f60c74e459df2b: Status 404 returned error can't find the container with id a5d197f4b89d18be42a07e349ab09fd6d27c60d91b6f875821f60c74e459df2b Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.636557 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.637087 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:43 crc kubenswrapper[4708]: I0227 17:15:43.638475 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.246599 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="100cf8e9-b657-4140-82e0-bf9e976024cc" path="/var/lib/kubelet/pods/100cf8e9-b657-4140-82e0-bf9e976024cc/volumes" Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.256597 4708 generic.go:334] "Generic (PLEG): container finished" podID="39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9" containerID="27e52105f4da2273e7614a61c44724eadc85f029309fd45d14d6569a0b898e67" exitCode=0 Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.257083 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6zlsq-config-b5crb" event={"ID":"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9","Type":"ContainerDied","Data":"27e52105f4da2273e7614a61c44724eadc85f029309fd45d14d6569a0b898e67"} Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.257112 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6zlsq-config-b5crb" event={"ID":"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9","Type":"ContainerStarted","Data":"a5d197f4b89d18be42a07e349ab09fd6d27c60d91b6f875821f60c74e459df2b"} Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.259908 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.306580 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.787080 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-752e-account-create-update-r66l6" Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.854131 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/588e4d1d-d7fe-425f-9fe3-032b1afd18eb-operator-scripts\") pod \"588e4d1d-d7fe-425f-9fe3-032b1afd18eb\" (UID: \"588e4d1d-d7fe-425f-9fe3-032b1afd18eb\") " Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.854198 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9kg7\" (UniqueName: \"kubernetes.io/projected/588e4d1d-d7fe-425f-9fe3-032b1afd18eb-kube-api-access-p9kg7\") pod \"588e4d1d-d7fe-425f-9fe3-032b1afd18eb\" (UID: \"588e4d1d-d7fe-425f-9fe3-032b1afd18eb\") " Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.855423 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/588e4d1d-d7fe-425f-9fe3-032b1afd18eb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "588e4d1d-d7fe-425f-9fe3-032b1afd18eb" (UID: "588e4d1d-d7fe-425f-9fe3-032b1afd18eb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.882475 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588e4d1d-d7fe-425f-9fe3-032b1afd18eb-kube-api-access-p9kg7" (OuterVolumeSpecName: "kube-api-access-p9kg7") pod "588e4d1d-d7fe-425f-9fe3-032b1afd18eb" (UID: "588e4d1d-d7fe-425f-9fe3-032b1afd18eb"). InnerVolumeSpecName "kube-api-access-p9kg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.956096 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/588e4d1d-d7fe-425f-9fe3-032b1afd18eb-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.956120 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9kg7\" (UniqueName: \"kubernetes.io/projected/588e4d1d-d7fe-425f-9fe3-032b1afd18eb-kube-api-access-p9kg7\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.956228 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dcz66" Feb 27 17:15:44 crc kubenswrapper[4708]: I0227 17:15:44.977054 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-v4pxq" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.037987 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-b7e7-account-create-update-985br" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.057825 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afe8f07a-d35a-4288-a552-9351a6ad0079-operator-scripts\") pod \"afe8f07a-d35a-4288-a552-9351a6ad0079\" (UID: \"afe8f07a-d35a-4288-a552-9351a6ad0079\") " Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.057884 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nd49n\" (UniqueName: \"kubernetes.io/projected/0cff7ba0-90d0-441b-ab1a-9d30c9f29e28-kube-api-access-nd49n\") pod \"0cff7ba0-90d0-441b-ab1a-9d30c9f29e28\" (UID: \"0cff7ba0-90d0-441b-ab1a-9d30c9f29e28\") " Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.057909 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cff7ba0-90d0-441b-ab1a-9d30c9f29e28-operator-scripts\") pod \"0cff7ba0-90d0-441b-ab1a-9d30c9f29e28\" (UID: \"0cff7ba0-90d0-441b-ab1a-9d30c9f29e28\") " Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.057957 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cdnl\" (UniqueName: \"kubernetes.io/projected/afe8f07a-d35a-4288-a552-9351a6ad0079-kube-api-access-4cdnl\") pod \"afe8f07a-d35a-4288-a552-9351a6ad0079\" (UID: \"afe8f07a-d35a-4288-a552-9351a6ad0079\") " Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.059747 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/afe8f07a-d35a-4288-a552-9351a6ad0079-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "afe8f07a-d35a-4288-a552-9351a6ad0079" (UID: "afe8f07a-d35a-4288-a552-9351a6ad0079"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.060415 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0cff7ba0-90d0-441b-ab1a-9d30c9f29e28-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "0cff7ba0-90d0-441b-ab1a-9d30c9f29e28" (UID: "0cff7ba0-90d0-441b-ab1a-9d30c9f29e28"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.066718 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afe8f07a-d35a-4288-a552-9351a6ad0079-kube-api-access-4cdnl" (OuterVolumeSpecName: "kube-api-access-4cdnl") pod "afe8f07a-d35a-4288-a552-9351a6ad0079" (UID: "afe8f07a-d35a-4288-a552-9351a6ad0079"). InnerVolumeSpecName "kube-api-access-4cdnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.068763 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cff7ba0-90d0-441b-ab1a-9d30c9f29e28-kube-api-access-nd49n" (OuterVolumeSpecName: "kube-api-access-nd49n") pod "0cff7ba0-90d0-441b-ab1a-9d30c9f29e28" (UID: "0cff7ba0-90d0-441b-ab1a-9d30c9f29e28"). InnerVolumeSpecName "kube-api-access-nd49n". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.159712 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sr5b8\" (UniqueName: \"kubernetes.io/projected/49315bcd-31dd-4e2e-8874-12904298dba9-kube-api-access-sr5b8\") pod \"49315bcd-31dd-4e2e-8874-12904298dba9\" (UID: \"49315bcd-31dd-4e2e-8874-12904298dba9\") " Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.159994 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49315bcd-31dd-4e2e-8874-12904298dba9-operator-scripts\") pod \"49315bcd-31dd-4e2e-8874-12904298dba9\" (UID: \"49315bcd-31dd-4e2e-8874-12904298dba9\") " Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.160415 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/afe8f07a-d35a-4288-a552-9351a6ad0079-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.160436 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nd49n\" (UniqueName: \"kubernetes.io/projected/0cff7ba0-90d0-441b-ab1a-9d30c9f29e28-kube-api-access-nd49n\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.160449 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/0cff7ba0-90d0-441b-ab1a-9d30c9f29e28-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.160458 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cdnl\" (UniqueName: \"kubernetes.io/projected/afe8f07a-d35a-4288-a552-9351a6ad0079-kube-api-access-4cdnl\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.160530 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49315bcd-31dd-4e2e-8874-12904298dba9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "49315bcd-31dd-4e2e-8874-12904298dba9" (UID: "49315bcd-31dd-4e2e-8874-12904298dba9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.163484 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49315bcd-31dd-4e2e-8874-12904298dba9-kube-api-access-sr5b8" (OuterVolumeSpecName: "kube-api-access-sr5b8") pod "49315bcd-31dd-4e2e-8874-12904298dba9" (UID: "49315bcd-31dd-4e2e-8874-12904298dba9"). InnerVolumeSpecName "kube-api-access-sr5b8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.262475 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/49315bcd-31dd-4e2e-8874-12904298dba9-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.262530 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sr5b8\" (UniqueName: \"kubernetes.io/projected/49315bcd-31dd-4e2e-8874-12904298dba9-kube-api-access-sr5b8\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.272149 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-dcz66" event={"ID":"afe8f07a-d35a-4288-a552-9351a6ad0079","Type":"ContainerDied","Data":"48c59cdd722720926d3b979cf7d636512ea35f131cb6c379a63fb3e848cf18af"} Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.272189 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="48c59cdd722720926d3b979cf7d636512ea35f131cb6c379a63fb3e848cf18af" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.272231 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-dcz66" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.274248 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-v4pxq" event={"ID":"0cff7ba0-90d0-441b-ab1a-9d30c9f29e28","Type":"ContainerDied","Data":"12c95044025665df7ee298f48210480cce4c6ac54f82d65c761296370207d966"} Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.274271 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12c95044025665df7ee298f48210480cce4c6ac54f82d65c761296370207d966" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.274323 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-v4pxq" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.286228 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-b7e7-account-create-update-985br" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.286256 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-b7e7-account-create-update-985br" event={"ID":"49315bcd-31dd-4e2e-8874-12904298dba9","Type":"ContainerDied","Data":"3c4aa1d87fab17a45bcbd8c1b9be184e003370412491b584b82a6c198827da5d"} Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.286336 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c4aa1d87fab17a45bcbd8c1b9be184e003370412491b584b82a6c198827da5d" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.289470 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-752e-account-create-update-r66l6" event={"ID":"588e4d1d-d7fe-425f-9fe3-032b1afd18eb","Type":"ContainerDied","Data":"2be5ccfd4a5d07798c203bb085dc4725cb0785213ee2e8b15f25cd649f7d9595"} Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.289504 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-752e-account-create-update-r66l6" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.289763 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2be5ccfd4a5d07798c203bb085dc4725cb0785213ee2e8b15f25cd649f7d9595" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.420261 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-xjljq"] Feb 27 17:15:45 crc kubenswrapper[4708]: E0227 17:15:45.420801 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49315bcd-31dd-4e2e-8874-12904298dba9" containerName="mariadb-account-create-update" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.420830 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="49315bcd-31dd-4e2e-8874-12904298dba9" containerName="mariadb-account-create-update" Feb 27 17:15:45 crc kubenswrapper[4708]: E0227 17:15:45.421178 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="588e4d1d-d7fe-425f-9fe3-032b1afd18eb" containerName="mariadb-account-create-update" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.421205 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="588e4d1d-d7fe-425f-9fe3-032b1afd18eb" containerName="mariadb-account-create-update" Feb 27 17:15:45 crc kubenswrapper[4708]: E0227 17:15:45.421255 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cff7ba0-90d0-441b-ab1a-9d30c9f29e28" containerName="mariadb-database-create" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.421269 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cff7ba0-90d0-441b-ab1a-9d30c9f29e28" containerName="mariadb-database-create" Feb 27 17:15:45 crc kubenswrapper[4708]: E0227 17:15:45.421284 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afe8f07a-d35a-4288-a552-9351a6ad0079" containerName="mariadb-database-create" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.421298 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="afe8f07a-d35a-4288-a552-9351a6ad0079" containerName="mariadb-database-create" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.421638 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cff7ba0-90d0-441b-ab1a-9d30c9f29e28" containerName="mariadb-database-create" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.421667 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="afe8f07a-d35a-4288-a552-9351a6ad0079" containerName="mariadb-database-create" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.421693 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="588e4d1d-d7fe-425f-9fe3-032b1afd18eb" containerName="mariadb-account-create-update" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.421712 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="49315bcd-31dd-4e2e-8874-12904298dba9" containerName="mariadb-account-create-update" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.422704 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xjljq" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.447912 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xjljq"] Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.553774 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-27ba-account-create-update-k2rtk"] Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.555359 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-27ba-account-create-update-k2rtk" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.557234 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.573785 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jfvl\" (UniqueName: \"kubernetes.io/projected/a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a-kube-api-access-7jfvl\") pod \"glance-db-create-xjljq\" (UID: \"a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a\") " pod="openstack/glance-db-create-xjljq" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.575068 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a-operator-scripts\") pod \"glance-db-create-xjljq\" (UID: \"a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a\") " pod="openstack/glance-db-create-xjljq" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.583903 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-27ba-account-create-update-k2rtk"] Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.677878 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a-operator-scripts\") pod \"glance-db-create-xjljq\" (UID: \"a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a\") " pod="openstack/glance-db-create-xjljq" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.677945 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c1e66f-ed02-4bf8-be04-bf5d722eb5a1-operator-scripts\") pod \"glance-27ba-account-create-update-k2rtk\" (UID: \"18c1e66f-ed02-4bf8-be04-bf5d722eb5a1\") " pod="openstack/glance-27ba-account-create-update-k2rtk" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.677998 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jfvl\" (UniqueName: \"kubernetes.io/projected/a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a-kube-api-access-7jfvl\") pod \"glance-db-create-xjljq\" (UID: \"a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a\") " pod="openstack/glance-db-create-xjljq" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.678073 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kltvr\" (UniqueName: \"kubernetes.io/projected/18c1e66f-ed02-4bf8-be04-bf5d722eb5a1-kube-api-access-kltvr\") pod \"glance-27ba-account-create-update-k2rtk\" (UID: \"18c1e66f-ed02-4bf8-be04-bf5d722eb5a1\") " pod="openstack/glance-27ba-account-create-update-k2rtk" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.678836 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a-operator-scripts\") pod \"glance-db-create-xjljq\" (UID: \"a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a\") " pod="openstack/glance-db-create-xjljq" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.704771 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jfvl\" (UniqueName: \"kubernetes.io/projected/a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a-kube-api-access-7jfvl\") pod \"glance-db-create-xjljq\" (UID: \"a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a\") " pod="openstack/glance-db-create-xjljq" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.764554 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xjljq" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.785525 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kltvr\" (UniqueName: \"kubernetes.io/projected/18c1e66f-ed02-4bf8-be04-bf5d722eb5a1-kube-api-access-kltvr\") pod \"glance-27ba-account-create-update-k2rtk\" (UID: \"18c1e66f-ed02-4bf8-be04-bf5d722eb5a1\") " pod="openstack/glance-27ba-account-create-update-k2rtk" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.785648 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c1e66f-ed02-4bf8-be04-bf5d722eb5a1-operator-scripts\") pod \"glance-27ba-account-create-update-k2rtk\" (UID: \"18c1e66f-ed02-4bf8-be04-bf5d722eb5a1\") " pod="openstack/glance-27ba-account-create-update-k2rtk" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.786516 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c1e66f-ed02-4bf8-be04-bf5d722eb5a1-operator-scripts\") pod \"glance-27ba-account-create-update-k2rtk\" (UID: \"18c1e66f-ed02-4bf8-be04-bf5d722eb5a1\") " pod="openstack/glance-27ba-account-create-update-k2rtk" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.786530 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.818430 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kltvr\" (UniqueName: \"kubernetes.io/projected/18c1e66f-ed02-4bf8-be04-bf5d722eb5a1-kube-api-access-kltvr\") pod \"glance-27ba-account-create-update-k2rtk\" (UID: \"18c1e66f-ed02-4bf8-be04-bf5d722eb5a1\") " pod="openstack/glance-27ba-account-create-update-k2rtk" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.894763 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-scripts\") pod \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.895715 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-27ba-account-create-update-k2rtk" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.898111 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-scripts" (OuterVolumeSpecName: "scripts") pod "39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9" (UID: "39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.898904 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-run-ovn\") pod \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.899113 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjjxq\" (UniqueName: \"kubernetes.io/projected/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-kube-api-access-wjjxq\") pod \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.899318 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-log-ovn\") pod \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.899395 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-run\") pod \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.899425 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-additional-scripts\") pod \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\" (UID: \"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9\") " Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.900430 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9" (UID: "39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.900476 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9" (UID: "39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.902961 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9" (UID: "39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.903035 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-run" (OuterVolumeSpecName: "var-run") pod "39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9" (UID: "39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:15:45 crc kubenswrapper[4708]: I0227 17:15:45.905999 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-kube-api-access-wjjxq" (OuterVolumeSpecName: "kube-api-access-wjjxq") pod "39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9" (UID: "39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9"). InnerVolumeSpecName "kube-api-access-wjjxq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.001574 4708 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-run\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.001597 4708 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-additional-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.001608 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.001621 4708 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-run-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.001631 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjjxq\" (UniqueName: \"kubernetes.io/projected/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-kube-api-access-wjjxq\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.001641 4708 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9-var-log-ovn\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.303637 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-6zlsq-config-b5crb" event={"ID":"39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9","Type":"ContainerDied","Data":"a5d197f4b89d18be42a07e349ab09fd6d27c60d91b6f875821f60c74e459df2b"} Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.303689 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5d197f4b89d18be42a07e349ab09fd6d27c60d91b6f875821f60c74e459df2b" Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.303777 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-6zlsq-config-b5crb" Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.329419 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-xjljq"] Feb 27 17:15:46 crc kubenswrapper[4708]: W0227 17:15:46.334808 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9cdcf07_4f86_4f69_bc25_2e4f7841fb2a.slice/crio-535fd3798b0fc3cea4322b245f9f4949338ae6a233d40747c9bc5009edc1f93f WatchSource:0}: Error finding container 535fd3798b0fc3cea4322b245f9f4949338ae6a233d40747c9bc5009edc1f93f: Status 404 returned error can't find the container with id 535fd3798b0fc3cea4322b245f9f4949338ae6a233d40747c9bc5009edc1f93f Feb 27 17:15:46 crc kubenswrapper[4708]: W0227 17:15:46.456498 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18c1e66f_ed02_4bf8_be04_bf5d722eb5a1.slice/crio-04e1939c6cd1ba5924ba36745a36175f60608be2c062e301dad64d2c02c2b4f6 WatchSource:0}: Error finding container 04e1939c6cd1ba5924ba36745a36175f60608be2c062e301dad64d2c02c2b4f6: Status 404 returned error can't find the container with id 04e1939c6cd1ba5924ba36745a36175f60608be2c062e301dad64d2c02c2b4f6 Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.458084 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-27ba-account-create-update-k2rtk"] Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.702733 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.703114 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" containerName="prometheus" containerID="cri-o://3833339ae9a1512b80609665e99a753ad63e0b74dff9ef6306f93413e6a2d44e" gracePeriod=600 Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.703264 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" containerName="config-reloader" containerID="cri-o://cbca860b738855f082dafc10482664274da1c261c7c04647e10b6647a4499dee" gracePeriod=600 Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.703287 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/prometheus-metric-storage-0" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" containerName="thanos-sidecar" containerID="cri-o://82c4054462fef15e551b6536a63a975747cfcb6c6e6be6870d18b02b0cc3595d" gracePeriod=600 Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.917016 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-6zlsq-config-b5crb"] Feb 27 17:15:46 crc kubenswrapper[4708]: I0227 17:15:46.923295 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-6zlsq-config-b5crb"] Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.317739 4708 generic.go:334] "Generic (PLEG): container finished" podID="c129cc00-13ca-4502-aa1b-866133b164a9" containerID="82c4054462fef15e551b6536a63a975747cfcb6c6e6be6870d18b02b0cc3595d" exitCode=0 Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.317786 4708 generic.go:334] "Generic (PLEG): container finished" podID="c129cc00-13ca-4502-aa1b-866133b164a9" 
containerID="cbca860b738855f082dafc10482664274da1c261c7c04647e10b6647a4499dee" exitCode=0 Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.317801 4708 generic.go:334] "Generic (PLEG): container finished" podID="c129cc00-13ca-4502-aa1b-866133b164a9" containerID="3833339ae9a1512b80609665e99a753ad63e0b74dff9ef6306f93413e6a2d44e" exitCode=0 Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.317890 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c129cc00-13ca-4502-aa1b-866133b164a9","Type":"ContainerDied","Data":"82c4054462fef15e551b6536a63a975747cfcb6c6e6be6870d18b02b0cc3595d"} Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.317963 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c129cc00-13ca-4502-aa1b-866133b164a9","Type":"ContainerDied","Data":"cbca860b738855f082dafc10482664274da1c261c7c04647e10b6647a4499dee"} Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.317990 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c129cc00-13ca-4502-aa1b-866133b164a9","Type":"ContainerDied","Data":"3833339ae9a1512b80609665e99a753ad63e0b74dff9ef6306f93413e6a2d44e"} Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.318134 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.320130 4708 generic.go:334] "Generic (PLEG): container finished" podID="18c1e66f-ed02-4bf8-be04-bf5d722eb5a1" containerID="0ec36a861e45a7d8f8ff82966674f72e22028da05ee4a768a4e48524fa376534" exitCode=0 Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.320223 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-27ba-account-create-update-k2rtk" event={"ID":"18c1e66f-ed02-4bf8-be04-bf5d722eb5a1","Type":"ContainerDied","Data":"0ec36a861e45a7d8f8ff82966674f72e22028da05ee4a768a4e48524fa376534"} Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.320256 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-27ba-account-create-update-k2rtk" event={"ID":"18c1e66f-ed02-4bf8-be04-bf5d722eb5a1","Type":"ContainerStarted","Data":"04e1939c6cd1ba5924ba36745a36175f60608be2c062e301dad64d2c02c2b4f6"} Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.323286 4708 generic.go:334] "Generic (PLEG): container finished" podID="a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a" containerID="5cd9cd7696d0ddfb346b1881e68d3aab23ba2ccd6d611b06527401054074620f" exitCode=0 Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.323371 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xjljq" event={"ID":"a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a","Type":"ContainerDied","Data":"5cd9cd7696d0ddfb346b1881e68d3aab23ba2ccd6d611b06527401054074620f"} Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.323466 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xjljq" event={"ID":"a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a","Type":"ContainerStarted","Data":"535fd3798b0fc3cea4322b245f9f4949338ae6a233d40747c9bc5009edc1f93f"} Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.394666 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-qt4x7"] Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.415911 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/root-account-create-update-qt4x7"] Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.481873 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-nv8ss"] Feb 27 17:15:47 crc kubenswrapper[4708]: E0227 17:15:47.482211 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9" containerName="ovn-config" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.482228 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9" containerName="ovn-config" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.482396 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9" containerName="ovn-config" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.483039 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nv8ss" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.487121 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.494820 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nv8ss"] Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.537139 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bcf0e9a-a14c-4b1f-8406-22719bee5979-operator-scripts\") pod \"root-account-create-update-nv8ss\" (UID: \"6bcf0e9a-a14c-4b1f-8406-22719bee5979\") " pod="openstack/root-account-create-update-nv8ss" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.537309 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txrqs\" (UniqueName: \"kubernetes.io/projected/6bcf0e9a-a14c-4b1f-8406-22719bee5979-kube-api-access-txrqs\") pod \"root-account-create-update-nv8ss\" (UID: \"6bcf0e9a-a14c-4b1f-8406-22719bee5979\") " pod="openstack/root-account-create-update-nv8ss" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.639190 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txrqs\" (UniqueName: \"kubernetes.io/projected/6bcf0e9a-a14c-4b1f-8406-22719bee5979-kube-api-access-txrqs\") pod \"root-account-create-update-nv8ss\" (UID: \"6bcf0e9a-a14c-4b1f-8406-22719bee5979\") " pod="openstack/root-account-create-update-nv8ss" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.639649 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bcf0e9a-a14c-4b1f-8406-22719bee5979-operator-scripts\") pod \"root-account-create-update-nv8ss\" (UID: \"6bcf0e9a-a14c-4b1f-8406-22719bee5979\") " pod="openstack/root-account-create-update-nv8ss" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.640688 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bcf0e9a-a14c-4b1f-8406-22719bee5979-operator-scripts\") pod \"root-account-create-update-nv8ss\" (UID: \"6bcf0e9a-a14c-4b1f-8406-22719bee5979\") " pod="openstack/root-account-create-update-nv8ss" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.690196 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txrqs\" 
(UniqueName: \"kubernetes.io/projected/6bcf0e9a-a14c-4b1f-8406-22719bee5979-kube-api-access-txrqs\") pod \"root-account-create-update-nv8ss\" (UID: \"6bcf0e9a-a14c-4b1f-8406-22719bee5979\") " pod="openstack/root-account-create-update-nv8ss" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.691211 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.803298 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-create-qdbv7"] Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.805100 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-qdbv7" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.811435 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nv8ss" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.841262 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-qdbv7"] Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.895498 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.937632 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-kqhws"] Feb 27 17:15:47 crc kubenswrapper[4708]: E0227 17:15:47.938164 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" containerName="config-reloader" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.938180 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" containerName="config-reloader" Feb 27 17:15:47 crc kubenswrapper[4708]: E0227 17:15:47.938195 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" containerName="thanos-sidecar" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.938213 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" containerName="thanos-sidecar" Feb 27 17:15:47 crc kubenswrapper[4708]: E0227 17:15:47.938228 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" containerName="prometheus" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.938235 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" containerName="prometheus" Feb 27 17:15:47 crc kubenswrapper[4708]: E0227 17:15:47.938245 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" containerName="init-config-reloader" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.938252 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" containerName="init-config-reloader" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.938446 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" containerName="config-reloader" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.938456 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" containerName="thanos-sidecar" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.938480 4708 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" containerName="prometheus" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.942369 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kqhws" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.953048 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f76c9acf-0333-4355-9c57-46fd59f26866-operator-scripts\") pod \"cloudkitty-db-create-qdbv7\" (UID: \"f76c9acf-0333-4355-9c57-46fd59f26866\") " pod="openstack/cloudkitty-db-create-qdbv7" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.953263 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75s5q\" (UniqueName: \"kubernetes.io/projected/f76c9acf-0333-4355-9c57-46fd59f26866-kube-api-access-75s5q\") pod \"cloudkitty-db-create-qdbv7\" (UID: \"f76c9acf-0333-4355-9c57-46fd59f26866\") " pod="openstack/cloudkitty-db-create-qdbv7" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.957763 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-7abe-account-create-update-649dm"] Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.961462 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7abe-account-create-update-649dm" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.963481 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.973292 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kqhws"] Feb 27 17:15:47 crc kubenswrapper[4708]: I0227 17:15:47.986914 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7abe-account-create-update-649dm"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.055697 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-1\") pod \"c129cc00-13ca-4502-aa1b-866133b164a9\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.058738 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-config\") pod \"c129cc00-13ca-4502-aa1b-866133b164a9\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.059754 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-1" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-1") pod "c129cc00-13ca-4502-aa1b-866133b164a9" (UID: "c129cc00-13ca-4502-aa1b-866133b164a9"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-1". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.059749 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-2\") pod \"c129cc00-13ca-4502-aa1b-866133b164a9\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.060774 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-web-config\") pod \"c129cc00-13ca-4502-aa1b-866133b164a9\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.060970 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8jsv9\" (UniqueName: \"kubernetes.io/projected/c129cc00-13ca-4502-aa1b-866133b164a9-kube-api-access-8jsv9\") pod \"c129cc00-13ca-4502-aa1b-866133b164a9\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.061135 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c129cc00-13ca-4502-aa1b-866133b164a9-tls-assets\") pod \"c129cc00-13ca-4502-aa1b-866133b164a9\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.061305 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-thanos-prometheus-http-client-file\") pod \"c129cc00-13ca-4502-aa1b-866133b164a9\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.061578 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-0\") pod \"c129cc00-13ca-4502-aa1b-866133b164a9\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.061909 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-metric-storage-db\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\") pod \"c129cc00-13ca-4502-aa1b-866133b164a9\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.062069 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c129cc00-13ca-4502-aa1b-866133b164a9-config-out\") pod \"c129cc00-13ca-4502-aa1b-866133b164a9\" (UID: \"c129cc00-13ca-4502-aa1b-866133b164a9\") " Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.062670 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f76c9acf-0333-4355-9c57-46fd59f26866-operator-scripts\") pod \"cloudkitty-db-create-qdbv7\" (UID: \"f76c9acf-0333-4355-9c57-46fd59f26866\") " pod="openstack/cloudkitty-db-create-qdbv7" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.063907 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded 
for volume "kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-0" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-0") pod "c129cc00-13ca-4502-aa1b-866133b164a9" (UID: "c129cc00-13ca-4502-aa1b-866133b164a9"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.064490 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f76c9acf-0333-4355-9c57-46fd59f26866-operator-scripts\") pod \"cloudkitty-db-create-qdbv7\" (UID: \"f76c9acf-0333-4355-9c57-46fd59f26866\") " pod="openstack/cloudkitty-db-create-qdbv7" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.065276 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3571f1c-e23d-479d-aceb-d1b79d5b1de0-operator-scripts\") pod \"cinder-7abe-account-create-update-649dm\" (UID: \"d3571f1c-e23d-479d-aceb-d1b79d5b1de0\") " pod="openstack/cinder-7abe-account-create-update-649dm" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.066251 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6n9bm\" (UniqueName: \"kubernetes.io/projected/b1bfca09-eb7d-485b-97b2-84ba0df72b73-kube-api-access-6n9bm\") pod \"cinder-db-create-kqhws\" (UID: \"b1bfca09-eb7d-485b-97b2-84ba0df72b73\") " pod="openstack/cinder-db-create-kqhws" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.065982 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-2" (OuterVolumeSpecName: "prometheus-metric-storage-rulefiles-2") pod "c129cc00-13ca-4502-aa1b-866133b164a9" (UID: "c129cc00-13ca-4502-aa1b-866133b164a9"). InnerVolumeSpecName "prometheus-metric-storage-rulefiles-2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.066606 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75s5q\" (UniqueName: \"kubernetes.io/projected/f76c9acf-0333-4355-9c57-46fd59f26866-kube-api-access-75s5q\") pod \"cloudkitty-db-create-qdbv7\" (UID: \"f76c9acf-0333-4355-9c57-46fd59f26866\") " pod="openstack/cloudkitty-db-create-qdbv7" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.070786 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-config" (OuterVolumeSpecName: "config") pod "c129cc00-13ca-4502-aa1b-866133b164a9" (UID: "c129cc00-13ca-4502-aa1b-866133b164a9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.071918 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c129cc00-13ca-4502-aa1b-866133b164a9-config-out" (OuterVolumeSpecName: "config-out") pod "c129cc00-13ca-4502-aa1b-866133b164a9" (UID: "c129cc00-13ca-4502-aa1b-866133b164a9"). InnerVolumeSpecName "config-out". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.071930 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c129cc00-13ca-4502-aa1b-866133b164a9-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "c129cc00-13ca-4502-aa1b-866133b164a9" (UID: "c129cc00-13ca-4502-aa1b-866133b164a9"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.075293 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "c129cc00-13ca-4502-aa1b-866133b164a9" (UID: "c129cc00-13ca-4502-aa1b-866133b164a9"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.082777 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c129cc00-13ca-4502-aa1b-866133b164a9-kube-api-access-8jsv9" (OuterVolumeSpecName: "kube-api-access-8jsv9") pod "c129cc00-13ca-4502-aa1b-866133b164a9" (UID: "c129cc00-13ca-4502-aa1b-866133b164a9"). InnerVolumeSpecName "kube-api-access-8jsv9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.085130 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8mzp\" (UniqueName: \"kubernetes.io/projected/d3571f1c-e23d-479d-aceb-d1b79d5b1de0-kube-api-access-n8mzp\") pod \"cinder-7abe-account-create-update-649dm\" (UID: \"d3571f1c-e23d-479d-aceb-d1b79d5b1de0\") " pod="openstack/cinder-7abe-account-create-update-649dm" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.085225 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1bfca09-eb7d-485b-97b2-84ba0df72b73-operator-scripts\") pod \"cinder-db-create-kqhws\" (UID: \"b1bfca09-eb7d-485b-97b2-84ba0df72b73\") " pod="openstack/cinder-db-create-kqhws" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.085431 4708 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-2\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.085452 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8jsv9\" (UniqueName: \"kubernetes.io/projected/c129cc00-13ca-4502-aa1b-866133b164a9-kube-api-access-8jsv9\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.085462 4708 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c129cc00-13ca-4502-aa1b-866133b164a9-tls-assets\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.085473 4708 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-thanos-prometheus-http-client-file\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.085483 4708 reconciler_common.go:293] "Volume 
detached for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.085496 4708 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c129cc00-13ca-4502-aa1b-866133b164a9-config-out\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.085510 4708 reconciler_common.go:293] "Volume detached for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/c129cc00-13ca-4502-aa1b-866133b164a9-prometheus-metric-storage-rulefiles-1\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.085523 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.094309 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42" (OuterVolumeSpecName: "prometheus-metric-storage-db") pod "c129cc00-13ca-4502-aa1b-866133b164a9" (UID: "c129cc00-13ca-4502-aa1b-866133b164a9"). InnerVolumeSpecName "pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.098658 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75s5q\" (UniqueName: \"kubernetes.io/projected/f76c9acf-0333-4355-9c57-46fd59f26866-kube-api-access-75s5q\") pod \"cloudkitty-db-create-qdbv7\" (UID: \"f76c9acf-0333-4355-9c57-46fd59f26866\") " pod="openstack/cloudkitty-db-create-qdbv7" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.099794 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-web-config" (OuterVolumeSpecName: "web-config") pod "c129cc00-13ca-4502-aa1b-866133b164a9" (UID: "c129cc00-13ca-4502-aa1b-866133b164a9"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.179331 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-h7hx9"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.180604 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-h7hx9" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.185766 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.185982 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-shlxn" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.186091 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.186333 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.187210 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1bfca09-eb7d-485b-97b2-84ba0df72b73-operator-scripts\") pod \"cinder-db-create-kqhws\" (UID: \"b1bfca09-eb7d-485b-97b2-84ba0df72b73\") " pod="openstack/cinder-db-create-kqhws" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.187255 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3571f1c-e23d-479d-aceb-d1b79d5b1de0-operator-scripts\") pod \"cinder-7abe-account-create-update-649dm\" (UID: \"d3571f1c-e23d-479d-aceb-d1b79d5b1de0\") " pod="openstack/cinder-7abe-account-create-update-649dm" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.187285 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6n9bm\" (UniqueName: \"kubernetes.io/projected/b1bfca09-eb7d-485b-97b2-84ba0df72b73-kube-api-access-6n9bm\") pod \"cinder-db-create-kqhws\" (UID: \"b1bfca09-eb7d-485b-97b2-84ba0df72b73\") " pod="openstack/cinder-db-create-kqhws" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.187410 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8mzp\" (UniqueName: \"kubernetes.io/projected/d3571f1c-e23d-479d-aceb-d1b79d5b1de0-kube-api-access-n8mzp\") pod \"cinder-7abe-account-create-update-649dm\" (UID: \"d3571f1c-e23d-479d-aceb-d1b79d5b1de0\") " pod="openstack/cinder-7abe-account-create-update-649dm" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.187476 4708 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\") on node \"crc\" " Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.187491 4708 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c129cc00-13ca-4502-aa1b-866133b164a9-web-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.188669 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3571f1c-e23d-479d-aceb-d1b79d5b1de0-operator-scripts\") pod \"cinder-7abe-account-create-update-649dm\" (UID: \"d3571f1c-e23d-479d-aceb-d1b79d5b1de0\") " pod="openstack/cinder-7abe-account-create-update-649dm" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.189556 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/b1bfca09-eb7d-485b-97b2-84ba0df72b73-operator-scripts\") pod \"cinder-db-create-kqhws\" (UID: \"b1bfca09-eb7d-485b-97b2-84ba0df72b73\") " pod="openstack/cinder-db-create-kqhws" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.200282 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-h7hx9"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.214011 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8mzp\" (UniqueName: \"kubernetes.io/projected/d3571f1c-e23d-479d-aceb-d1b79d5b1de0-kube-api-access-n8mzp\") pod \"cinder-7abe-account-create-update-649dm\" (UID: \"d3571f1c-e23d-479d-aceb-d1b79d5b1de0\") " pod="openstack/cinder-7abe-account-create-update-649dm" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.216916 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-136d-account-create-update-pwh4j"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.218180 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-136d-account-create-update-pwh4j" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.219599 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6n9bm\" (UniqueName: \"kubernetes.io/projected/b1bfca09-eb7d-485b-97b2-84ba0df72b73-kube-api-access-6n9bm\") pod \"cinder-db-create-kqhws\" (UID: \"b1bfca09-eb7d-485b-97b2-84ba0df72b73\") " pod="openstack/cinder-db-create-kqhws" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.222139 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.226048 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-qdbv7" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.233105 4708 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.233243 4708 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42") on node "crc" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.247275 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9" path="/var/lib/kubelet/pods/39f0627e-0d87-4f9a-bfd3-4ae4bd1113f9/volumes" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.247959 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b484936d-0feb-4107-a28a-2e0c7ac7e267" path="/var/lib/kubelet/pods/b484936d-0feb-4107-a28a-2e0c7ac7e267/volumes" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.260569 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-136d-account-create-update-pwh4j"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.285292 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-create-kqhws" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.293901 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76927836-595f-41d2-ba31-e1e4de928b09-operator-scripts\") pod \"neutron-136d-account-create-update-pwh4j\" (UID: \"76927836-595f-41d2-ba31-e1e4de928b09\") " pod="openstack/neutron-136d-account-create-update-pwh4j" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.294036 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgts4\" (UniqueName: \"kubernetes.io/projected/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-kube-api-access-lgts4\") pod \"keystone-db-sync-h7hx9\" (UID: \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\") " pod="openstack/keystone-db-sync-h7hx9" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.294061 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-combined-ca-bundle\") pod \"keystone-db-sync-h7hx9\" (UID: \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\") " pod="openstack/keystone-db-sync-h7hx9" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.294085 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzzgh\" (UniqueName: \"kubernetes.io/projected/76927836-595f-41d2-ba31-e1e4de928b09-kube-api-access-kzzgh\") pod \"neutron-136d-account-create-update-pwh4j\" (UID: \"76927836-595f-41d2-ba31-e1e4de928b09\") " pod="openstack/neutron-136d-account-create-update-pwh4j" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.294144 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-config-data\") pod \"keystone-db-sync-h7hx9\" (UID: \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\") " pod="openstack/keystone-db-sync-h7hx9" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.294204 4708 reconciler_common.go:293] "Volume detached for volume \"pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.301129 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-4zfxn"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.302571 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4zfxn" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.325350 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7abe-account-create-update-649dm" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.344484 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-2006-account-create-update-njkx2"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.345869 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2006-account-create-update-njkx2" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.359487 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.396799 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-config-data\") pod \"keystone-db-sync-h7hx9\" (UID: \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\") " pod="openstack/keystone-db-sync-h7hx9" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.397112 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76927836-595f-41d2-ba31-e1e4de928b09-operator-scripts\") pod \"neutron-136d-account-create-update-pwh4j\" (UID: \"76927836-595f-41d2-ba31-e1e4de928b09\") " pod="openstack/neutron-136d-account-create-update-pwh4j" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.397292 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgts4\" (UniqueName: \"kubernetes.io/projected/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-kube-api-access-lgts4\") pod \"keystone-db-sync-h7hx9\" (UID: \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\") " pod="openstack/keystone-db-sync-h7hx9" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.397366 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-combined-ca-bundle\") pod \"keystone-db-sync-h7hx9\" (UID: \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\") " pod="openstack/keystone-db-sync-h7hx9" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.397461 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzzgh\" (UniqueName: \"kubernetes.io/projected/76927836-595f-41d2-ba31-e1e4de928b09-kube-api-access-kzzgh\") pod \"neutron-136d-account-create-update-pwh4j\" (UID: \"76927836-595f-41d2-ba31-e1e4de928b09\") " pod="openstack/neutron-136d-account-create-update-pwh4j" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.399825 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76927836-595f-41d2-ba31-e1e4de928b09-operator-scripts\") pod \"neutron-136d-account-create-update-pwh4j\" (UID: \"76927836-595f-41d2-ba31-e1e4de928b09\") " pod="openstack/neutron-136d-account-create-update-pwh4j" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.400240 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.400886 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"c129cc00-13ca-4502-aa1b-866133b164a9","Type":"ContainerDied","Data":"f99dd294c93531ac750194c523b6d510397a9588a4dda378073bae6d42b4ecc4"} Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.401053 4708 scope.go:117] "RemoveContainer" containerID="82c4054462fef15e551b6536a63a975747cfcb6c6e6be6870d18b02b0cc3595d" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.403172 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-combined-ca-bundle\") pod \"keystone-db-sync-h7hx9\" (UID: \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\") " pod="openstack/keystone-db-sync-h7hx9" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.403716 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-config-data\") pod \"keystone-db-sync-h7hx9\" (UID: \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\") " pod="openstack/keystone-db-sync-h7hx9" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.420500 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2006-account-create-update-njkx2"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.422825 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgts4\" (UniqueName: \"kubernetes.io/projected/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-kube-api-access-lgts4\") pod \"keystone-db-sync-h7hx9\" (UID: \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\") " pod="openstack/keystone-db-sync-h7hx9" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.456483 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4zfxn"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.464080 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzzgh\" (UniqueName: \"kubernetes.io/projected/76927836-595f-41d2-ba31-e1e4de928b09-kube-api-access-kzzgh\") pod \"neutron-136d-account-create-update-pwh4j\" (UID: \"76927836-595f-41d2-ba31-e1e4de928b09\") " pod="openstack/neutron-136d-account-create-update-pwh4j" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.501352 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46zch\" (UniqueName: \"kubernetes.io/projected/115ecd43-9912-4bf4-933f-4fa0497f0a9d-kube-api-access-46zch\") pod \"barbican-db-create-4zfxn\" (UID: \"115ecd43-9912-4bf4-933f-4fa0497f0a9d\") " pod="openstack/barbican-db-create-4zfxn" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.501426 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flr2x\" (UniqueName: \"kubernetes.io/projected/87b33cfd-36db-424a-9225-a9a35b8a8562-kube-api-access-flr2x\") pod \"barbican-2006-account-create-update-njkx2\" (UID: \"87b33cfd-36db-424a-9225-a9a35b8a8562\") " pod="openstack/barbican-2006-account-create-update-njkx2" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.501482 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/87b33cfd-36db-424a-9225-a9a35b8a8562-operator-scripts\") pod \"barbican-2006-account-create-update-njkx2\" (UID: \"87b33cfd-36db-424a-9225-a9a35b8a8562\") " pod="openstack/barbican-2006-account-create-update-njkx2" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.501502 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115ecd43-9912-4bf4-933f-4fa0497f0a9d-operator-scripts\") pod \"barbican-db-create-4zfxn\" (UID: \"115ecd43-9912-4bf4-933f-4fa0497f0a9d\") " pod="openstack/barbican-db-create-4zfxn" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.502499 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-mq627"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.506364 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mq627" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.512252 4708 scope.go:117] "RemoveContainer" containerID="cbca860b738855f082dafc10482664274da1c261c7c04647e10b6647a4499dee" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.512696 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-h7hx9" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.513291 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mq627"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.528087 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-7f00-account-create-update-g5jdv"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.540104 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.548146 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-db-secret" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.553512 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-136d-account-create-update-pwh4j" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.554416 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-7f00-account-create-update-g5jdv"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.571523 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.578627 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.586868 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-nv8ss"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.602492 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r265b\" (UniqueName: \"kubernetes.io/projected/c4c4ff25-5692-417b-bd4c-53fb2cbedba7-kube-api-access-r265b\") pod \"neutron-db-create-mq627\" (UID: \"c4c4ff25-5692-417b-bd4c-53fb2cbedba7\") " pod="openstack/neutron-db-create-mq627" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.602540 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0e56c4e-da77-42ed-b415-fafbb5e465ca-operator-scripts\") pod \"cloudkitty-7f00-account-create-update-g5jdv\" (UID: \"d0e56c4e-da77-42ed-b415-fafbb5e465ca\") " pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.602573 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46zch\" (UniqueName: \"kubernetes.io/projected/115ecd43-9912-4bf4-933f-4fa0497f0a9d-kube-api-access-46zch\") pod \"barbican-db-create-4zfxn\" (UID: \"115ecd43-9912-4bf4-933f-4fa0497f0a9d\") " pod="openstack/barbican-db-create-4zfxn" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.602619 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbc7j\" (UniqueName: \"kubernetes.io/projected/d0e56c4e-da77-42ed-b415-fafbb5e465ca-kube-api-access-zbc7j\") pod \"cloudkitty-7f00-account-create-update-g5jdv\" (UID: \"d0e56c4e-da77-42ed-b415-fafbb5e465ca\") " pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.602641 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flr2x\" (UniqueName: \"kubernetes.io/projected/87b33cfd-36db-424a-9225-a9a35b8a8562-kube-api-access-flr2x\") pod \"barbican-2006-account-create-update-njkx2\" (UID: \"87b33cfd-36db-424a-9225-a9a35b8a8562\") " pod="openstack/barbican-2006-account-create-update-njkx2" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.602686 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b33cfd-36db-424a-9225-a9a35b8a8562-operator-scripts\") pod \"barbican-2006-account-create-update-njkx2\" (UID: \"87b33cfd-36db-424a-9225-a9a35b8a8562\") " pod="openstack/barbican-2006-account-create-update-njkx2" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.602705 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/c4c4ff25-5692-417b-bd4c-53fb2cbedba7-operator-scripts\") pod \"neutron-db-create-mq627\" (UID: \"c4c4ff25-5692-417b-bd4c-53fb2cbedba7\") " pod="openstack/neutron-db-create-mq627" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.602724 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115ecd43-9912-4bf4-933f-4fa0497f0a9d-operator-scripts\") pod \"barbican-db-create-4zfxn\" (UID: \"115ecd43-9912-4bf4-933f-4fa0497f0a9d\") " pod="openstack/barbican-db-create-4zfxn" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.603602 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115ecd43-9912-4bf4-933f-4fa0497f0a9d-operator-scripts\") pod \"barbican-db-create-4zfxn\" (UID: \"115ecd43-9912-4bf4-933f-4fa0497f0a9d\") " pod="openstack/barbican-db-create-4zfxn" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.604119 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b33cfd-36db-424a-9225-a9a35b8a8562-operator-scripts\") pod \"barbican-2006-account-create-update-njkx2\" (UID: \"87b33cfd-36db-424a-9225-a9a35b8a8562\") " pod="openstack/barbican-2006-account-create-update-njkx2" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.611210 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.613551 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.616380 4708 scope.go:117] "RemoveContainer" containerID="3833339ae9a1512b80609665e99a753ad63e0b74dff9ef6306f93413e6a2d44e" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.616577 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.617374 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-1" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.617820 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.618613 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"prometheus-metric-storage-rulefiles-2" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.618895 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-web-config" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.618932 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"prometheus-metric-storage-thanos-prometheus-http-client-file" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.619835 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-metric-storage-prometheus-svc" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.619926 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"metric-storage-prometheus-dockercfg-j5w7w" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.624820 4708 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"prometheus-metric-storage-tls-assets-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.626012 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flr2x\" (UniqueName: \"kubernetes.io/projected/87b33cfd-36db-424a-9225-a9a35b8a8562-kube-api-access-flr2x\") pod \"barbican-2006-account-create-update-njkx2\" (UID: \"87b33cfd-36db-424a-9225-a9a35b8a8562\") " pod="openstack/barbican-2006-account-create-update-njkx2" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.633644 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46zch\" (UniqueName: \"kubernetes.io/projected/115ecd43-9912-4bf4-933f-4fa0497f0a9d-kube-api-access-46zch\") pod \"barbican-db-create-4zfxn\" (UID: \"115ecd43-9912-4bf4-933f-4fa0497f0a9d\") " pod="openstack/barbican-db-create-4zfxn" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.634123 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.634684 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4zfxn" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.674684 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-2006-account-create-update-njkx2" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.705923 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r265b\" (UniqueName: \"kubernetes.io/projected/c4c4ff25-5692-417b-bd4c-53fb2cbedba7-kube-api-access-r265b\") pod \"neutron-db-create-mq627\" (UID: \"c4c4ff25-5692-417b-bd4c-53fb2cbedba7\") " pod="openstack/neutron-db-create-mq627" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.705973 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0e56c4e-da77-42ed-b415-fafbb5e465ca-operator-scripts\") pod \"cloudkitty-7f00-account-create-update-g5jdv\" (UID: \"d0e56c4e-da77-42ed-b415-fafbb5e465ca\") " pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.706042 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zbc7j\" (UniqueName: \"kubernetes.io/projected/d0e56c4e-da77-42ed-b415-fafbb5e465ca-kube-api-access-zbc7j\") pod \"cloudkitty-7f00-account-create-update-g5jdv\" (UID: \"d0e56c4e-da77-42ed-b415-fafbb5e465ca\") " pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.706102 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4c4ff25-5692-417b-bd4c-53fb2cbedba7-operator-scripts\") pod \"neutron-db-create-mq627\" (UID: \"c4c4ff25-5692-417b-bd4c-53fb2cbedba7\") " pod="openstack/neutron-db-create-mq627" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.707119 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4c4ff25-5692-417b-bd4c-53fb2cbedba7-operator-scripts\") pod \"neutron-db-create-mq627\" (UID: \"c4c4ff25-5692-417b-bd4c-53fb2cbedba7\") " pod="openstack/neutron-db-create-mq627" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.711619 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0e56c4e-da77-42ed-b415-fafbb5e465ca-operator-scripts\") pod \"cloudkitty-7f00-account-create-update-g5jdv\" (UID: \"d0e56c4e-da77-42ed-b415-fafbb5e465ca\") " pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.726559 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r265b\" (UniqueName: \"kubernetes.io/projected/c4c4ff25-5692-417b-bd4c-53fb2cbedba7-kube-api-access-r265b\") pod \"neutron-db-create-mq627\" (UID: \"c4c4ff25-5692-417b-bd4c-53fb2cbedba7\") " pod="openstack/neutron-db-create-mq627" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.743450 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zbc7j\" (UniqueName: \"kubernetes.io/projected/d0e56c4e-da77-42ed-b415-fafbb5e465ca-kube-api-access-zbc7j\") pod \"cloudkitty-7f00-account-create-update-g5jdv\" (UID: \"d0e56c4e-da77-42ed-b415-fafbb5e465ca\") " pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.794195 4708 scope.go:117] "RemoveContainer" containerID="2e7a064c26d17a9d34a9bfa4a83396738620cb57acbf204cdfae3a3489c41b06" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.800373 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.807788 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.807943 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.807992 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5107d3e0-ea93-4d89-b36c-f726b481e0e0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.808024 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.808042 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mw586\" (UniqueName: \"kubernetes.io/projected/5107d3e0-ea93-4d89-b36c-f726b481e0e0-kube-api-access-mw586\") pod 
\"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.808076 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5107d3e0-ea93-4d89-b36c-f726b481e0e0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.808147 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.808181 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5107d3e0-ea93-4d89-b36c-f726b481e0e0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.808203 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5107d3e0-ea93-4d89-b36c-f726b481e0e0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.808256 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.808281 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-config\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.808310 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.808330 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5107d3e0-ea93-4d89-b36c-f726b481e0e0-config-out\") pod \"prometheus-metric-storage-0\" (UID: 
\"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.910410 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5107d3e0-ea93-4d89-b36c-f726b481e0e0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.910483 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.910512 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mw586\" (UniqueName: \"kubernetes.io/projected/5107d3e0-ea93-4d89-b36c-f726b481e0e0-kube-api-access-mw586\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.910539 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5107d3e0-ea93-4d89-b36c-f726b481e0e0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.910631 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.910671 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5107d3e0-ea93-4d89-b36c-f726b481e0e0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.910707 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5107d3e0-ea93-4d89-b36c-f726b481e0e0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.910732 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 
17:15:48.910753 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-config\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.910783 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.910807 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5107d3e0-ea93-4d89-b36c-f726b481e0e0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.910866 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.910900 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.916639 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/5107d3e0-ea93-4d89-b36c-f726b481e0e0-tls-assets\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.918611 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/5107d3e0-ea93-4d89-b36c-f726b481e0e0-prometheus-metric-storage-rulefiles-1\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.919029 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-secret-combined-ca-bundle\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.919096 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/5107d3e0-ea93-4d89-b36c-f726b481e0e0-prometheus-metric-storage-rulefiles-2\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " 
pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.920016 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.920044 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6a3d85c1c1fcaae45da21a4ce37501d7d698227fff3b451bbf342800bd1947c3/globalmount\"" pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.922858 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-metric-storage-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/5107d3e0-ea93-4d89-b36c-f726b481e0e0-prometheus-metric-storage-rulefiles-0\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.923746 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/5107d3e0-ea93-4d89-b36c-f726b481e0e0-config-out\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.925441 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-web-config\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.931025 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-web-config-tls-secret-cert-cert-metric-storage-prometh-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.931645 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mq627" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.932533 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-thanos-prometheus-http-client-file\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.933101 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-config\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.938260 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\" (UniqueName: \"kubernetes.io/secret/5107d3e0-ea93-4d89-b36c-f726b481e0e0-web-config-tls-secret-key-cert-metric-storage-promethe-dc638c2d\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:48 crc kubenswrapper[4708]: I0227 17:15:48.938298 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mw586\" (UniqueName: \"kubernetes.io/projected/5107d3e0-ea93-4d89-b36c-f726b481e0e0-kube-api-access-mw586\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.012187 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b47fd57a-88c3-4e6f-8955-17be4bc30c42\") pod \"prometheus-metric-storage-0\" (UID: \"5107d3e0-ea93-4d89-b36c-f726b481e0e0\") " pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.163302 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-27ba-account-create-update-k2rtk" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.217549 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kltvr\" (UniqueName: \"kubernetes.io/projected/18c1e66f-ed02-4bf8-be04-bf5d722eb5a1-kube-api-access-kltvr\") pod \"18c1e66f-ed02-4bf8-be04-bf5d722eb5a1\" (UID: \"18c1e66f-ed02-4bf8-be04-bf5d722eb5a1\") " Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.217713 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c1e66f-ed02-4bf8-be04-bf5d722eb5a1-operator-scripts\") pod \"18c1e66f-ed02-4bf8-be04-bf5d722eb5a1\" (UID: \"18c1e66f-ed02-4bf8-be04-bf5d722eb5a1\") " Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.219081 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18c1e66f-ed02-4bf8-be04-bf5d722eb5a1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18c1e66f-ed02-4bf8-be04-bf5d722eb5a1" (UID: "18c1e66f-ed02-4bf8-be04-bf5d722eb5a1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.227177 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18c1e66f-ed02-4bf8-be04-bf5d722eb5a1-kube-api-access-kltvr" (OuterVolumeSpecName: "kube-api-access-kltvr") pod "18c1e66f-ed02-4bf8-be04-bf5d722eb5a1" (UID: "18c1e66f-ed02-4bf8-be04-bf5d722eb5a1"). InnerVolumeSpecName "kube-api-access-kltvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.265016 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/prometheus-metric-storage-0" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.279002 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-kqhws"] Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.319839 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18c1e66f-ed02-4bf8-be04-bf5d722eb5a1-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.319877 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kltvr\" (UniqueName: \"kubernetes.io/projected/18c1e66f-ed02-4bf8-be04-bf5d722eb5a1-kube-api-access-kltvr\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.326997 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-create-qdbv7"] Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.354503 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-xjljq" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.421210 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-qdbv7" event={"ID":"f76c9acf-0333-4355-9c57-46fd59f26866","Type":"ContainerStarted","Data":"ef28bfa4d7f8cd6d818b8516dd8a7295c04a5666660624be463a3059cd9cf1b2"} Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.421477 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jfvl\" (UniqueName: \"kubernetes.io/projected/a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a-kube-api-access-7jfvl\") pod \"a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a\" (UID: \"a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a\") " Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.421582 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a-operator-scripts\") pod \"a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a\" (UID: \"a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a\") " Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.423231 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nv8ss" event={"ID":"6bcf0e9a-a14c-4b1f-8406-22719bee5979","Type":"ContainerStarted","Data":"f82bdd7cb361ac7b8a1d9c4ea602ba899e970fab3d9c02acfdb38f6ac5886210"} Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.423281 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nv8ss" event={"ID":"6bcf0e9a-a14c-4b1f-8406-22719bee5979","Type":"ContainerStarted","Data":"656445a30a4edf50ff1fce4e4143ec8072ddf528a502f80d7b0a0d26c54e4b99"} Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.424397 4708 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a" (UID: "a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.428659 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-27ba-account-create-update-k2rtk" event={"ID":"18c1e66f-ed02-4bf8-be04-bf5d722eb5a1","Type":"ContainerDied","Data":"04e1939c6cd1ba5924ba36745a36175f60608be2c062e301dad64d2c02c2b4f6"} Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.428704 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04e1939c6cd1ba5924ba36745a36175f60608be2c062e301dad64d2c02c2b4f6" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.428774 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-27ba-account-create-update-k2rtk" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.438510 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a-kube-api-access-7jfvl" (OuterVolumeSpecName: "kube-api-access-7jfvl") pod "a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a" (UID: "a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a"). InnerVolumeSpecName "kube-api-access-7jfvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.447824 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-nv8ss" podStartSLOduration=2.447808634 podStartE2EDuration="2.447808634s" podCreationTimestamp="2026-02-27 17:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:15:49.446372073 +0000 UTC m=+1347.962169660" watchObservedRunningTime="2026-02-27 17:15:49.447808634 +0000 UTC m=+1347.963606221" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.477323 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-xjljq" event={"ID":"a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a","Type":"ContainerDied","Data":"535fd3798b0fc3cea4322b245f9f4949338ae6a233d40747c9bc5009edc1f93f"} Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.477340 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-xjljq" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.477360 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="535fd3798b0fc3cea4322b245f9f4949338ae6a233d40747c9bc5009edc1f93f" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.486385 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kqhws" event={"ID":"b1bfca09-eb7d-485b-97b2-84ba0df72b73","Type":"ContainerStarted","Data":"06dfad7a8a124c624b267f2cc161e2faf3f9476d395ce605b80c1ffe90d7e6f5"} Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.525007 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jfvl\" (UniqueName: \"kubernetes.io/projected/a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a-kube-api-access-7jfvl\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.525304 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.835021 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-7abe-account-create-update-649dm"] Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.842164 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-4zfxn"] Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.849909 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-136d-account-create-update-pwh4j"] Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.862599 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-h7hx9"] Feb 27 17:15:49 crc kubenswrapper[4708]: I0227 17:15:49.951877 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-2006-account-create-update-njkx2"] Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.010577 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-7f00-account-create-update-g5jdv"] Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.037283 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-mq627"] Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.064899 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/prometheus-metric-storage-0"] Feb 27 17:15:50 crc kubenswrapper[4708]: W0227 17:15:50.147377 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5107d3e0_ea93_4d89_b36c_f726b481e0e0.slice/crio-d472cafd48a27baa96ea9e779851557cd142e95b2ef3bdbe385d80bb576dd5eb WatchSource:0}: Error finding container d472cafd48a27baa96ea9e779851557cd142e95b2ef3bdbe385d80bb576dd5eb: Status 404 returned error can't find the container with id d472cafd48a27baa96ea9e779851557cd142e95b2ef3bdbe385d80bb576dd5eb Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.243715 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c129cc00-13ca-4502-aa1b-866133b164a9" path="/var/lib/kubelet/pods/c129cc00-13ca-4502-aa1b-866133b164a9/volumes" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.514863 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2006-account-create-update-njkx2" 
event={"ID":"87b33cfd-36db-424a-9225-a9a35b8a8562","Type":"ContainerStarted","Data":"44ac8eace8c1d1a0ca0423e4014961aed9d702ceef601443d3384a82e7e54dae"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.514964 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2006-account-create-update-njkx2" event={"ID":"87b33cfd-36db-424a-9225-a9a35b8a8562","Type":"ContainerStarted","Data":"58a66af9581c1335287b5a622d1b91b6f2fd5bb3245ebd89fe8e7ef869897185"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.517127 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4zfxn" event={"ID":"115ecd43-9912-4bf4-933f-4fa0497f0a9d","Type":"ContainerStarted","Data":"587e33d4a94b849a493d60e4cc751a09ce0312dc666379edc1841c72a80fd9af"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.517160 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4zfxn" event={"ID":"115ecd43-9912-4bf4-933f-4fa0497f0a9d","Type":"ContainerStarted","Data":"ce5fdb66cf65a65adf86115face367df7f9d75c3b93ab9038688af3071f15dca"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.519576 4708 generic.go:334] "Generic (PLEG): container finished" podID="b1bfca09-eb7d-485b-97b2-84ba0df72b73" containerID="a4a9627741308a5ac9af33acda8a9ae894dd7c360d713237cf2e2733ed78cc23" exitCode=0 Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.519674 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kqhws" event={"ID":"b1bfca09-eb7d-485b-97b2-84ba0df72b73","Type":"ContainerDied","Data":"a4a9627741308a5ac9af33acda8a9ae894dd7c360d713237cf2e2733ed78cc23"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.521664 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mq627" event={"ID":"c4c4ff25-5692-417b-bd4c-53fb2cbedba7","Type":"ContainerStarted","Data":"3d85cd546ad48a469ad1ea6205c60fab34ec9a955e111ec6b140332b7354fb29"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.521821 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mq627" event={"ID":"c4c4ff25-5692-417b-bd4c-53fb2cbedba7","Type":"ContainerStarted","Data":"0321cf80ff8cc5a2b326d53492320f0af793d854396262300b6e20301ab60ffe"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.523083 4708 generic.go:334] "Generic (PLEG): container finished" podID="6bcf0e9a-a14c-4b1f-8406-22719bee5979" containerID="f82bdd7cb361ac7b8a1d9c4ea602ba899e970fab3d9c02acfdb38f6ac5886210" exitCode=0 Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.523112 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nv8ss" event={"ID":"6bcf0e9a-a14c-4b1f-8406-22719bee5979","Type":"ContainerDied","Data":"f82bdd7cb361ac7b8a1d9c4ea602ba899e970fab3d9c02acfdb38f6ac5886210"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.524146 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5107d3e0-ea93-4d89-b36c-f726b481e0e0","Type":"ContainerStarted","Data":"d472cafd48a27baa96ea9e779851557cd142e95b2ef3bdbe385d80bb576dd5eb"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.525433 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" event={"ID":"d0e56c4e-da77-42ed-b415-fafbb5e465ca","Type":"ContainerStarted","Data":"ecb5c8e9128522465d626c7afdfb4930b001dc764c3c6b28fefb8bd22ab39fb2"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 
17:15:50.525513 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" event={"ID":"d0e56c4e-da77-42ed-b415-fafbb5e465ca","Type":"ContainerStarted","Data":"ce1712c0d94086440de2ff6c217deb613a5a3674408e9c93ab05abb4a9efeff1"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.527276 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-136d-account-create-update-pwh4j" event={"ID":"76927836-595f-41d2-ba31-e1e4de928b09","Type":"ContainerStarted","Data":"736083f55640d67dd3b6a8270f0a0a9078855d0ba9950b60d2cfe4dab09bce00"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.527330 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-136d-account-create-update-pwh4j" event={"ID":"76927836-595f-41d2-ba31-e1e4de928b09","Type":"ContainerStarted","Data":"042cca8816f93148295a8bbfedf8f4391518b529c72f9dbd9b85c1eede58bf88"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.536641 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7abe-account-create-update-649dm" event={"ID":"d3571f1c-e23d-479d-aceb-d1b79d5b1de0","Type":"ContainerStarted","Data":"78747c6be1f230887aa936015160311d40ba3ed105c9f843c1a8d329e43c6b45"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.536758 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7abe-account-create-update-649dm" event={"ID":"d3571f1c-e23d-479d-aceb-d1b79d5b1de0","Type":"ContainerStarted","Data":"372b579a7e4da1585dcd8acd151204094774213ac9999732e77a586a525a05d3"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.544461 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h7hx9" event={"ID":"988145b2-7dc5-4a8e-8206-bf03ab36fb2a","Type":"ContainerStarted","Data":"464c522bdfdd4f24d061688e2f1d1277c9fc4750ecc91063d331bf6f1bd934ef"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.547563 4708 generic.go:334] "Generic (PLEG): container finished" podID="f76c9acf-0333-4355-9c57-46fd59f26866" containerID="a46c4b1cf298a17161bf28bf4756effcdb2d1c2d319f1bc4e47979a772377343" exitCode=0 Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.547634 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-qdbv7" event={"ID":"f76c9acf-0333-4355-9c57-46fd59f26866","Type":"ContainerDied","Data":"a46c4b1cf298a17161bf28bf4756effcdb2d1c2d319f1bc4e47979a772377343"} Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.557764 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-2006-account-create-update-njkx2" podStartSLOduration=2.5577458330000002 podStartE2EDuration="2.557745833s" podCreationTimestamp="2026-02-27 17:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:15:50.532018583 +0000 UTC m=+1349.047816160" watchObservedRunningTime="2026-02-27 17:15:50.557745833 +0000 UTC m=+1349.073543420" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.579654 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-mq627" podStartSLOduration=2.579636824 podStartE2EDuration="2.579636824s" podCreationTimestamp="2026-02-27 17:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:15:50.574873119 +0000 UTC m=+1349.090670706" 
watchObservedRunningTime="2026-02-27 17:15:50.579636824 +0000 UTC m=+1349.095434411" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.604188 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" podStartSLOduration=2.60416966 podStartE2EDuration="2.60416966s" podCreationTimestamp="2026-02-27 17:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:15:50.596452021 +0000 UTC m=+1349.112249608" watchObservedRunningTime="2026-02-27 17:15:50.60416966 +0000 UTC m=+1349.119967247" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.652949 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-136d-account-create-update-pwh4j" podStartSLOduration=2.652932083 podStartE2EDuration="2.652932083s" podCreationTimestamp="2026-02-27 17:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:15:50.651233085 +0000 UTC m=+1349.167030672" watchObservedRunningTime="2026-02-27 17:15:50.652932083 +0000 UTC m=+1349.168729660" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.678126 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-4zfxn" podStartSLOduration=2.678109928 podStartE2EDuration="2.678109928s" podCreationTimestamp="2026-02-27 17:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:15:50.673716283 +0000 UTC m=+1349.189513870" watchObservedRunningTime="2026-02-27 17:15:50.678109928 +0000 UTC m=+1349.193907515" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.694651 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-7abe-account-create-update-649dm" podStartSLOduration=3.694628616 podStartE2EDuration="3.694628616s" podCreationTimestamp="2026-02-27 17:15:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:15:50.689522541 +0000 UTC m=+1349.205320128" watchObservedRunningTime="2026-02-27 17:15:50.694628616 +0000 UTC m=+1349.210426203" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.743082 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-ql5zj"] Feb 27 17:15:50 crc kubenswrapper[4708]: E0227 17:15:50.743746 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18c1e66f-ed02-4bf8-be04-bf5d722eb5a1" containerName="mariadb-account-create-update" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.743768 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="18c1e66f-ed02-4bf8-be04-bf5d722eb5a1" containerName="mariadb-account-create-update" Feb 27 17:15:50 crc kubenswrapper[4708]: E0227 17:15:50.743822 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a" containerName="mariadb-database-create" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.743832 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a" containerName="mariadb-database-create" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.744071 4708 memory_manager.go:354] "RemoveStaleState removing state" 
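
In the pod_startup_latency_tracker.go records above, podStartSLOduration is consistent with watchObservedRunningTime minus podCreationTimestamp, with image-pull time excluded (firstStartedPulling and lastFinishedPulling are the zero time here because the images were already present on the node). Recomputing the barbican-2006 value from the timestamps printed in the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// The layout matches how the timestamps appear in the log records.
    	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
    	created, _ := time.Parse(layout, "2026-02-27 17:15:48 +0000 UTC")
    	running, _ := time.Parse(layout, "2026-02-27 17:15:50.557745833 +0000 UTC")
    	// Prints 2.557745833s, matching podStartSLOduration in the record.
    	fmt.Println(running.Sub(created))
    }
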
podUID="18c1e66f-ed02-4bf8-be04-bf5d722eb5a1" containerName="mariadb-account-create-update" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.744100 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a" containerName="mariadb-database-create" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.745092 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ql5zj" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.747740 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.747803 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8v89l" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.750196 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ql5zj"] Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.756792 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-config-data\") pod \"glance-db-sync-ql5zj\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " pod="openstack/glance-db-sync-ql5zj" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.756875 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-db-sync-config-data\") pod \"glance-db-sync-ql5zj\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " pod="openstack/glance-db-sync-ql5zj" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.756955 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-combined-ca-bundle\") pod \"glance-db-sync-ql5zj\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " pod="openstack/glance-db-sync-ql5zj" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.756998 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bf5w\" (UniqueName: \"kubernetes.io/projected/aee9dccb-4475-404d-b169-496cc3ae6a2b-kube-api-access-6bf5w\") pod \"glance-db-sync-ql5zj\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " pod="openstack/glance-db-sync-ql5zj" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.857538 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-db-sync-config-data\") pod \"glance-db-sync-ql5zj\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " pod="openstack/glance-db-sync-ql5zj" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.857609 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-combined-ca-bundle\") pod \"glance-db-sync-ql5zj\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " pod="openstack/glance-db-sync-ql5zj" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.857644 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bf5w\" (UniqueName: 
\"kubernetes.io/projected/aee9dccb-4475-404d-b169-496cc3ae6a2b-kube-api-access-6bf5w\") pod \"glance-db-sync-ql5zj\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " pod="openstack/glance-db-sync-ql5zj" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.857716 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-config-data\") pod \"glance-db-sync-ql5zj\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " pod="openstack/glance-db-sync-ql5zj" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.862988 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-db-sync-config-data\") pod \"glance-db-sync-ql5zj\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " pod="openstack/glance-db-sync-ql5zj" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.863367 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-combined-ca-bundle\") pod \"glance-db-sync-ql5zj\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " pod="openstack/glance-db-sync-ql5zj" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.863439 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-config-data\") pod \"glance-db-sync-ql5zj\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " pod="openstack/glance-db-sync-ql5zj" Feb 27 17:15:50 crc kubenswrapper[4708]: I0227 17:15:50.877106 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bf5w\" (UniqueName: \"kubernetes.io/projected/aee9dccb-4475-404d-b169-496cc3ae6a2b-kube-api-access-6bf5w\") pod \"glance-db-sync-ql5zj\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " pod="openstack/glance-db-sync-ql5zj" Feb 27 17:15:51 crc kubenswrapper[4708]: I0227 17:15:51.091313 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ql5zj" Feb 27 17:15:51 crc kubenswrapper[4708]: I0227 17:15:51.560921 4708 generic.go:334] "Generic (PLEG): container finished" podID="d3571f1c-e23d-479d-aceb-d1b79d5b1de0" containerID="78747c6be1f230887aa936015160311d40ba3ed105c9f843c1a8d329e43c6b45" exitCode=0 Feb 27 17:15:51 crc kubenswrapper[4708]: I0227 17:15:51.561061 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7abe-account-create-update-649dm" event={"ID":"d3571f1c-e23d-479d-aceb-d1b79d5b1de0","Type":"ContainerDied","Data":"78747c6be1f230887aa936015160311d40ba3ed105c9f843c1a8d329e43c6b45"} Feb 27 17:15:51 crc kubenswrapper[4708]: I0227 17:15:51.563990 4708 generic.go:334] "Generic (PLEG): container finished" podID="115ecd43-9912-4bf4-933f-4fa0497f0a9d" containerID="587e33d4a94b849a493d60e4cc751a09ce0312dc666379edc1841c72a80fd9af" exitCode=0 Feb 27 17:15:51 crc kubenswrapper[4708]: I0227 17:15:51.564068 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4zfxn" event={"ID":"115ecd43-9912-4bf4-933f-4fa0497f0a9d","Type":"ContainerDied","Data":"587e33d4a94b849a493d60e4cc751a09ce0312dc666379edc1841c72a80fd9af"} Feb 27 17:15:51 crc kubenswrapper[4708]: I0227 17:15:51.566656 4708 generic.go:334] "Generic (PLEG): container finished" podID="c4c4ff25-5692-417b-bd4c-53fb2cbedba7" containerID="3d85cd546ad48a469ad1ea6205c60fab34ec9a955e111ec6b140332b7354fb29" exitCode=0 Feb 27 17:15:51 crc kubenswrapper[4708]: I0227 17:15:51.566737 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mq627" event={"ID":"c4c4ff25-5692-417b-bd4c-53fb2cbedba7","Type":"ContainerDied","Data":"3d85cd546ad48a469ad1ea6205c60fab34ec9a955e111ec6b140332b7354fb29"} Feb 27 17:15:51 crc kubenswrapper[4708]: I0227 17:15:51.575037 4708 generic.go:334] "Generic (PLEG): container finished" podID="d0e56c4e-da77-42ed-b415-fafbb5e465ca" containerID="ecb5c8e9128522465d626c7afdfb4930b001dc764c3c6b28fefb8bd22ab39fb2" exitCode=0 Feb 27 17:15:51 crc kubenswrapper[4708]: I0227 17:15:51.575166 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" event={"ID":"d0e56c4e-da77-42ed-b415-fafbb5e465ca","Type":"ContainerDied","Data":"ecb5c8e9128522465d626c7afdfb4930b001dc764c3c6b28fefb8bd22ab39fb2"} Feb 27 17:15:51 crc kubenswrapper[4708]: I0227 17:15:51.577755 4708 generic.go:334] "Generic (PLEG): container finished" podID="87b33cfd-36db-424a-9225-a9a35b8a8562" containerID="44ac8eace8c1d1a0ca0423e4014961aed9d702ceef601443d3384a82e7e54dae" exitCode=0 Feb 27 17:15:51 crc kubenswrapper[4708]: I0227 17:15:51.577812 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2006-account-create-update-njkx2" event={"ID":"87b33cfd-36db-424a-9225-a9a35b8a8562","Type":"ContainerDied","Data":"44ac8eace8c1d1a0ca0423e4014961aed9d702ceef601443d3384a82e7e54dae"} Feb 27 17:15:51 crc kubenswrapper[4708]: I0227 17:15:51.593527 4708 generic.go:334] "Generic (PLEG): container finished" podID="76927836-595f-41d2-ba31-e1e4de928b09" containerID="736083f55640d67dd3b6a8270f0a0a9078855d0ba9950b60d2cfe4dab09bce00" exitCode=0 Feb 27 17:15:51 crc kubenswrapper[4708]: I0227 17:15:51.593609 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-136d-account-create-update-pwh4j" event={"ID":"76927836-595f-41d2-ba31-e1e4de928b09","Type":"ContainerDied","Data":"736083f55640d67dd3b6a8270f0a0a9078855d0ba9950b60d2cfe4dab09bce00"} Feb 27 17:15:51 crc kubenswrapper[4708]: 
I0227 17:15:51.738727 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-ql5zj"] Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.028308 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-create-qdbv7" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.109485 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nv8ss" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.113553 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kqhws" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.179394 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f76c9acf-0333-4355-9c57-46fd59f26866-operator-scripts\") pod \"f76c9acf-0333-4355-9c57-46fd59f26866\" (UID: \"f76c9acf-0333-4355-9c57-46fd59f26866\") " Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.179539 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75s5q\" (UniqueName: \"kubernetes.io/projected/f76c9acf-0333-4355-9c57-46fd59f26866-kube-api-access-75s5q\") pod \"f76c9acf-0333-4355-9c57-46fd59f26866\" (UID: \"f76c9acf-0333-4355-9c57-46fd59f26866\") " Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.181415 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f76c9acf-0333-4355-9c57-46fd59f26866-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "f76c9acf-0333-4355-9c57-46fd59f26866" (UID: "f76c9acf-0333-4355-9c57-46fd59f26866"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.188506 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f76c9acf-0333-4355-9c57-46fd59f26866-kube-api-access-75s5q" (OuterVolumeSpecName: "kube-api-access-75s5q") pod "f76c9acf-0333-4355-9c57-46fd59f26866" (UID: "f76c9acf-0333-4355-9c57-46fd59f26866"). InnerVolumeSpecName "kube-api-access-75s5q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.281342 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6n9bm\" (UniqueName: \"kubernetes.io/projected/b1bfca09-eb7d-485b-97b2-84ba0df72b73-kube-api-access-6n9bm\") pod \"b1bfca09-eb7d-485b-97b2-84ba0df72b73\" (UID: \"b1bfca09-eb7d-485b-97b2-84ba0df72b73\") " Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.282065 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-txrqs\" (UniqueName: \"kubernetes.io/projected/6bcf0e9a-a14c-4b1f-8406-22719bee5979-kube-api-access-txrqs\") pod \"6bcf0e9a-a14c-4b1f-8406-22719bee5979\" (UID: \"6bcf0e9a-a14c-4b1f-8406-22719bee5979\") " Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.282300 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bcf0e9a-a14c-4b1f-8406-22719bee5979-operator-scripts\") pod \"6bcf0e9a-a14c-4b1f-8406-22719bee5979\" (UID: \"6bcf0e9a-a14c-4b1f-8406-22719bee5979\") " Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.282400 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1bfca09-eb7d-485b-97b2-84ba0df72b73-operator-scripts\") pod \"b1bfca09-eb7d-485b-97b2-84ba0df72b73\" (UID: \"b1bfca09-eb7d-485b-97b2-84ba0df72b73\") " Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.283948 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f76c9acf-0333-4355-9c57-46fd59f26866-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.283966 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75s5q\" (UniqueName: \"kubernetes.io/projected/f76c9acf-0333-4355-9c57-46fd59f26866-kube-api-access-75s5q\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.284698 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bcf0e9a-a14c-4b1f-8406-22719bee5979-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6bcf0e9a-a14c-4b1f-8406-22719bee5979" (UID: "6bcf0e9a-a14c-4b1f-8406-22719bee5979"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.284972 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1bfca09-eb7d-485b-97b2-84ba0df72b73-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b1bfca09-eb7d-485b-97b2-84ba0df72b73" (UID: "b1bfca09-eb7d-485b-97b2-84ba0df72b73"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.287550 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bcf0e9a-a14c-4b1f-8406-22719bee5979-kube-api-access-txrqs" (OuterVolumeSpecName: "kube-api-access-txrqs") pod "6bcf0e9a-a14c-4b1f-8406-22719bee5979" (UID: "6bcf0e9a-a14c-4b1f-8406-22719bee5979"). InnerVolumeSpecName "kube-api-access-txrqs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.287584 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1bfca09-eb7d-485b-97b2-84ba0df72b73-kube-api-access-6n9bm" (OuterVolumeSpecName: "kube-api-access-6n9bm") pod "b1bfca09-eb7d-485b-97b2-84ba0df72b73" (UID: "b1bfca09-eb7d-485b-97b2-84ba0df72b73"). InnerVolumeSpecName "kube-api-access-6n9bm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.384557 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6n9bm\" (UniqueName: \"kubernetes.io/projected/b1bfca09-eb7d-485b-97b2-84ba0df72b73-kube-api-access-6n9bm\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.384587 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-txrqs\" (UniqueName: \"kubernetes.io/projected/6bcf0e9a-a14c-4b1f-8406-22719bee5979-kube-api-access-txrqs\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.384597 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6bcf0e9a-a14c-4b1f-8406-22719bee5979-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.384607 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b1bfca09-eb7d-485b-97b2-84ba0df72b73-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:52 crc kubenswrapper[4708]: E0227 17:15:52.418315 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf76c9acf_0333_4355_9c57_46fd59f26866.slice\": RecentStats: unable to find data in memory cache]" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.613167 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-kqhws" event={"ID":"b1bfca09-eb7d-485b-97b2-84ba0df72b73","Type":"ContainerDied","Data":"06dfad7a8a124c624b267f2cc161e2faf3f9476d395ce605b80c1ffe90d7e6f5"} Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.613214 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06dfad7a8a124c624b267f2cc161e2faf3f9476d395ce605b80c1ffe90d7e6f5" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.613191 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-kqhws" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.615182 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-create-qdbv7" event={"ID":"f76c9acf-0333-4355-9c57-46fd59f26866","Type":"ContainerDied","Data":"ef28bfa4d7f8cd6d818b8516dd8a7295c04a5666660624be463a3059cd9cf1b2"} Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.615220 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef28bfa4d7f8cd6d818b8516dd8a7295c04a5666660624be463a3059cd9cf1b2" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.615265 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-create-qdbv7" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.617361 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-nv8ss" event={"ID":"6bcf0e9a-a14c-4b1f-8406-22719bee5979","Type":"ContainerDied","Data":"656445a30a4edf50ff1fce4e4143ec8072ddf528a502f80d7b0a0d26c54e4b99"} Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.617443 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="656445a30a4edf50ff1fce4e4143ec8072ddf528a502f80d7b0a0d26c54e4b99" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.617533 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-nv8ss" Feb 27 17:15:52 crc kubenswrapper[4708]: I0227 17:15:52.624984 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ql5zj" event={"ID":"aee9dccb-4475-404d-b169-496cc3ae6a2b","Type":"ContainerStarted","Data":"cf01239e0c7c8d6603ee86dbc358d3665517f288058cedef7a9a5b69223ce8fe"} Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.066695 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.198458 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cloudkitty-lokistack-ingester-0" podUID="238aef54-b0dd-495b-a5f8-66cc43b12088" containerName="loki-ingester" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.200542 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0e56c4e-da77-42ed-b415-fafbb5e465ca-operator-scripts\") pod \"d0e56c4e-da77-42ed-b415-fafbb5e465ca\" (UID: \"d0e56c4e-da77-42ed-b415-fafbb5e465ca\") " Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.200580 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zbc7j\" (UniqueName: \"kubernetes.io/projected/d0e56c4e-da77-42ed-b415-fafbb5e465ca-kube-api-access-zbc7j\") pod \"d0e56c4e-da77-42ed-b415-fafbb5e465ca\" (UID: \"d0e56c4e-da77-42ed-b415-fafbb5e465ca\") " Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.201406 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d0e56c4e-da77-42ed-b415-fafbb5e465ca-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d0e56c4e-da77-42ed-b415-fafbb5e465ca" (UID: "d0e56c4e-da77-42ed-b415-fafbb5e465ca"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.205062 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d0e56c4e-da77-42ed-b415-fafbb5e465ca-kube-api-access-zbc7j" (OuterVolumeSpecName: "kube-api-access-zbc7j") pod "d0e56c4e-da77-42ed-b415-fafbb5e465ca" (UID: "d0e56c4e-da77-42ed-b415-fafbb5e465ca"). InnerVolumeSpecName "kube-api-access-zbc7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.271725 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2006-account-create-update-njkx2" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.276357 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4zfxn" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.293163 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-mq627" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.294829 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7abe-account-create-update-649dm" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.305062 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d0e56c4e-da77-42ed-b415-fafbb5e465ca-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.305127 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zbc7j\" (UniqueName: \"kubernetes.io/projected/d0e56c4e-da77-42ed-b415-fafbb5e465ca-kube-api-access-zbc7j\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.319630 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-136d-account-create-update-pwh4j" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.410631 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115ecd43-9912-4bf4-933f-4fa0497f0a9d-operator-scripts\") pod \"115ecd43-9912-4bf4-933f-4fa0497f0a9d\" (UID: \"115ecd43-9912-4bf4-933f-4fa0497f0a9d\") " Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.411197 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3571f1c-e23d-479d-aceb-d1b79d5b1de0-operator-scripts\") pod \"d3571f1c-e23d-479d-aceb-d1b79d5b1de0\" (UID: \"d3571f1c-e23d-479d-aceb-d1b79d5b1de0\") " Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.411238 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b33cfd-36db-424a-9225-a9a35b8a8562-operator-scripts\") pod \"87b33cfd-36db-424a-9225-a9a35b8a8562\" (UID: \"87b33cfd-36db-424a-9225-a9a35b8a8562\") " Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.411252 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/115ecd43-9912-4bf4-933f-4fa0497f0a9d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "115ecd43-9912-4bf4-933f-4fa0497f0a9d" (UID: "115ecd43-9912-4bf4-933f-4fa0497f0a9d"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.411306 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r265b\" (UniqueName: \"kubernetes.io/projected/c4c4ff25-5692-417b-bd4c-53fb2cbedba7-kube-api-access-r265b\") pod \"c4c4ff25-5692-417b-bd4c-53fb2cbedba7\" (UID: \"c4c4ff25-5692-417b-bd4c-53fb2cbedba7\") " Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.411366 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8mzp\" (UniqueName: \"kubernetes.io/projected/d3571f1c-e23d-479d-aceb-d1b79d5b1de0-kube-api-access-n8mzp\") pod \"d3571f1c-e23d-479d-aceb-d1b79d5b1de0\" (UID: \"d3571f1c-e23d-479d-aceb-d1b79d5b1de0\") " Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.411455 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46zch\" (UniqueName: \"kubernetes.io/projected/115ecd43-9912-4bf4-933f-4fa0497f0a9d-kube-api-access-46zch\") pod \"115ecd43-9912-4bf4-933f-4fa0497f0a9d\" (UID: \"115ecd43-9912-4bf4-933f-4fa0497f0a9d\") " Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.411579 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flr2x\" (UniqueName: \"kubernetes.io/projected/87b33cfd-36db-424a-9225-a9a35b8a8562-kube-api-access-flr2x\") pod \"87b33cfd-36db-424a-9225-a9a35b8a8562\" (UID: \"87b33cfd-36db-424a-9225-a9a35b8a8562\") " Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.411781 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4c4ff25-5692-417b-bd4c-53fb2cbedba7-operator-scripts\") pod \"c4c4ff25-5692-417b-bd4c-53fb2cbedba7\" (UID: \"c4c4ff25-5692-417b-bd4c-53fb2cbedba7\") " Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.411933 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87b33cfd-36db-424a-9225-a9a35b8a8562-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "87b33cfd-36db-424a-9225-a9a35b8a8562" (UID: "87b33cfd-36db-424a-9225-a9a35b8a8562"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.412708 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c4c4ff25-5692-417b-bd4c-53fb2cbedba7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "c4c4ff25-5692-417b-bd4c-53fb2cbedba7" (UID: "c4c4ff25-5692-417b-bd4c-53fb2cbedba7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.413071 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3571f1c-e23d-479d-aceb-d1b79d5b1de0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d3571f1c-e23d-479d-aceb-d1b79d5b1de0" (UID: "d3571f1c-e23d-479d-aceb-d1b79d5b1de0"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.413647 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/c4c4ff25-5692-417b-bd4c-53fb2cbedba7-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.413674 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/115ecd43-9912-4bf4-933f-4fa0497f0a9d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.413684 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d3571f1c-e23d-479d-aceb-d1b79d5b1de0-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.413695 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/87b33cfd-36db-424a-9225-a9a35b8a8562-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.417535 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/115ecd43-9912-4bf4-933f-4fa0497f0a9d-kube-api-access-46zch" (OuterVolumeSpecName: "kube-api-access-46zch") pod "115ecd43-9912-4bf4-933f-4fa0497f0a9d" (UID: "115ecd43-9912-4bf4-933f-4fa0497f0a9d"). InnerVolumeSpecName "kube-api-access-46zch". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.417796 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87b33cfd-36db-424a-9225-a9a35b8a8562-kube-api-access-flr2x" (OuterVolumeSpecName: "kube-api-access-flr2x") pod "87b33cfd-36db-424a-9225-a9a35b8a8562" (UID: "87b33cfd-36db-424a-9225-a9a35b8a8562"). InnerVolumeSpecName "kube-api-access-flr2x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.419193 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4c4ff25-5692-417b-bd4c-53fb2cbedba7-kube-api-access-r265b" (OuterVolumeSpecName: "kube-api-access-r265b") pod "c4c4ff25-5692-417b-bd4c-53fb2cbedba7" (UID: "c4c4ff25-5692-417b-bd4c-53fb2cbedba7"). InnerVolumeSpecName "kube-api-access-r265b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.419427 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3571f1c-e23d-479d-aceb-d1b79d5b1de0-kube-api-access-n8mzp" (OuterVolumeSpecName: "kube-api-access-n8mzp") pod "d3571f1c-e23d-479d-aceb-d1b79d5b1de0" (UID: "d3571f1c-e23d-479d-aceb-d1b79d5b1de0"). InnerVolumeSpecName "kube-api-access-n8mzp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.514886 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76927836-595f-41d2-ba31-e1e4de928b09-operator-scripts\") pod \"76927836-595f-41d2-ba31-e1e4de928b09\" (UID: \"76927836-595f-41d2-ba31-e1e4de928b09\") " Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.515080 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzzgh\" (UniqueName: \"kubernetes.io/projected/76927836-595f-41d2-ba31-e1e4de928b09-kube-api-access-kzzgh\") pod \"76927836-595f-41d2-ba31-e1e4de928b09\" (UID: \"76927836-595f-41d2-ba31-e1e4de928b09\") " Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.515504 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r265b\" (UniqueName: \"kubernetes.io/projected/c4c4ff25-5692-417b-bd4c-53fb2cbedba7-kube-api-access-r265b\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.515523 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8mzp\" (UniqueName: \"kubernetes.io/projected/d3571f1c-e23d-479d-aceb-d1b79d5b1de0-kube-api-access-n8mzp\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.515534 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46zch\" (UniqueName: \"kubernetes.io/projected/115ecd43-9912-4bf4-933f-4fa0497f0a9d-kube-api-access-46zch\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.515544 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flr2x\" (UniqueName: \"kubernetes.io/projected/87b33cfd-36db-424a-9225-a9a35b8a8562-kube-api-access-flr2x\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.515657 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76927836-595f-41d2-ba31-e1e4de928b09-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "76927836-595f-41d2-ba31-e1e4de928b09" (UID: "76927836-595f-41d2-ba31-e1e4de928b09"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.519460 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76927836-595f-41d2-ba31-e1e4de928b09-kube-api-access-kzzgh" (OuterVolumeSpecName: "kube-api-access-kzzgh") pod "76927836-595f-41d2-ba31-e1e4de928b09" (UID: "76927836-595f-41d2-ba31-e1e4de928b09"). InnerVolumeSpecName "kube-api-access-kzzgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.617948 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzzgh\" (UniqueName: \"kubernetes.io/projected/76927836-595f-41d2-ba31-e1e4de928b09-kube-api-access-kzzgh\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.617998 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/76927836-595f-41d2-ba31-e1e4de928b09-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.647700 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-136d-account-create-update-pwh4j" event={"ID":"76927836-595f-41d2-ba31-e1e4de928b09","Type":"ContainerDied","Data":"042cca8816f93148295a8bbfedf8f4391518b529c72f9dbd9b85c1eede58bf88"} Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.647783 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="042cca8816f93148295a8bbfedf8f4391518b529c72f9dbd9b85c1eede58bf88" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.648982 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-136d-account-create-update-pwh4j" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.659362 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-7abe-account-create-update-649dm" event={"ID":"d3571f1c-e23d-479d-aceb-d1b79d5b1de0","Type":"ContainerDied","Data":"372b579a7e4da1585dcd8acd151204094774213ac9999732e77a586a525a05d3"} Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.660467 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="372b579a7e4da1585dcd8acd151204094774213ac9999732e77a586a525a05d3" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.659394 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-7abe-account-create-update-649dm" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.666569 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-4zfxn" event={"ID":"115ecd43-9912-4bf4-933f-4fa0497f0a9d","Type":"ContainerDied","Data":"ce5fdb66cf65a65adf86115face367df7f9d75c3b93ab9038688af3071f15dca"} Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.666594 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce5fdb66cf65a65adf86115face367df7f9d75c3b93ab9038688af3071f15dca" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.666702 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-4zfxn" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.669956 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-mq627" event={"ID":"c4c4ff25-5692-417b-bd4c-53fb2cbedba7","Type":"ContainerDied","Data":"0321cf80ff8cc5a2b326d53492320f0af793d854396262300b6e20301ab60ffe"} Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.669979 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0321cf80ff8cc5a2b326d53492320f0af793d854396262300b6e20301ab60ffe" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.670040 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-mq627" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.681807 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5107d3e0-ea93-4d89-b36c-f726b481e0e0","Type":"ContainerStarted","Data":"099aeb8d257ba7786ef66cfb0741dc5a3d6043d5672c0eb78a4e2ae1a427cdce"} Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.691068 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" event={"ID":"d0e56c4e-da77-42ed-b415-fafbb5e465ca","Type":"ContainerDied","Data":"ce1712c0d94086440de2ff6c217deb613a5a3674408e9c93ab05abb4a9efeff1"} Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.691130 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce1712c0d94086440de2ff6c217deb613a5a3674408e9c93ab05abb4a9efeff1" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.691238 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-7f00-account-create-update-g5jdv" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.695734 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-2006-account-create-update-njkx2" event={"ID":"87b33cfd-36db-424a-9225-a9a35b8a8562","Type":"ContainerDied","Data":"58a66af9581c1335287b5a622d1b91b6f2fd5bb3245ebd89fe8e7ef869897185"} Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.695772 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58a66af9581c1335287b5a622d1b91b6f2fd5bb3245ebd89fe8e7ef869897185" Feb 27 17:15:53 crc kubenswrapper[4708]: I0227 17:15:53.695823 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-2006-account-create-update-njkx2" Feb 27 17:15:56 crc kubenswrapper[4708]: I0227 17:15:56.739922 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h7hx9" event={"ID":"988145b2-7dc5-4a8e-8206-bf03ab36fb2a","Type":"ContainerStarted","Data":"21e6f4b20cfaf2460b5f143c973f2b856e812b7c767c23c3880a0d5d167333ef"} Feb 27 17:15:56 crc kubenswrapper[4708]: I0227 17:15:56.768566 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-h7hx9" podStartSLOduration=2.675982038 podStartE2EDuration="8.768538161s" podCreationTimestamp="2026-02-27 17:15:48 +0000 UTC" firstStartedPulling="2026-02-27 17:15:49.886960633 +0000 UTC m=+1348.402758220" lastFinishedPulling="2026-02-27 17:15:55.979516756 +0000 UTC m=+1354.495314343" observedRunningTime="2026-02-27 17:15:56.759789603 +0000 UTC m=+1355.275587210" watchObservedRunningTime="2026-02-27 17:15:56.768538161 +0000 UTC m=+1355.284335748" Feb 27 17:15:59 crc kubenswrapper[4708]: I0227 17:15:59.770383 4708 generic.go:334] "Generic (PLEG): container finished" podID="988145b2-7dc5-4a8e-8206-bf03ab36fb2a" containerID="21e6f4b20cfaf2460b5f143c973f2b856e812b7c767c23c3880a0d5d167333ef" exitCode=0 Feb 27 17:15:59 crc kubenswrapper[4708]: I0227 17:15:59.770430 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h7hx9" event={"ID":"988145b2-7dc5-4a8e-8206-bf03ab36fb2a","Type":"ContainerDied","Data":"21e6f4b20cfaf2460b5f143c973f2b856e812b7c767c23c3880a0d5d167333ef"} Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.153972 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536876-xtn7r"] Feb 27 17:16:00 crc kubenswrapper[4708]: E0227 17:16:00.154808 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d0e56c4e-da77-42ed-b415-fafbb5e465ca" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.154834 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d0e56c4e-da77-42ed-b415-fafbb5e465ca" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: E0227 17:16:00.154888 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76c9acf-0333-4355-9c57-46fd59f26866" containerName="mariadb-database-create" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.154902 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76c9acf-0333-4355-9c57-46fd59f26866" containerName="mariadb-database-create" Feb 27 17:16:00 crc kubenswrapper[4708]: E0227 17:16:00.154925 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6bcf0e9a-a14c-4b1f-8406-22719bee5979" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.154939 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="6bcf0e9a-a14c-4b1f-8406-22719bee5979" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: E0227 17:16:00.154975 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1bfca09-eb7d-485b-97b2-84ba0df72b73" containerName="mariadb-database-create" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.154988 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1bfca09-eb7d-485b-97b2-84ba0df72b73" containerName="mariadb-database-create" Feb 27 17:16:00 crc kubenswrapper[4708]: E0227 17:16:00.155004 4708 cpu_manager.go:410] "RemoveStaleState: removing container" 
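
The event={...} payloads in the "SyncLoop (PLEG)" entries throughout this stretch are printed as JSON, so they can be pulled apart offline. A sketch using a local struct whose field names simply mirror the printed keys (no claim about kubelet's internal event type), decoding one ContainerDied payload copied verbatim from above:

package main

import (
	"encoding/json"
	"fmt"
)

// Shape of the printed payload: ID is the pod UID, Type is the lifecycle
// transition (ContainerStarted, ContainerDied, ...), and Data is the
// container or sandbox ID the event refers to.
type plegEvent struct {
	ID   string
	Type string
	Data string
}

func main() {
	raw := `{"ID":"76927836-595f-41d2-ba31-e1e4de928b09","Type":"ContainerDied","Data":"736083f55640d67dd3b6a8270f0a0a9078855d0ba9950b60d2cfe4dab09bce00"}`
	var ev plegEvent
	if err := json.Unmarshal([]byte(raw), &ev); err != nil {
		panic(err)
	}
	fmt.Printf("pod %s: %s (container %.12s)\n", ev.ID, ev.Type, ev.Data)
}
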
podUID="115ecd43-9912-4bf4-933f-4fa0497f0a9d" containerName="mariadb-database-create" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.155016 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="115ecd43-9912-4bf4-933f-4fa0497f0a9d" containerName="mariadb-database-create" Feb 27 17:16:00 crc kubenswrapper[4708]: E0227 17:16:00.155039 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76927836-595f-41d2-ba31-e1e4de928b09" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.155051 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="76927836-595f-41d2-ba31-e1e4de928b09" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: E0227 17:16:00.155066 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4c4ff25-5692-417b-bd4c-53fb2cbedba7" containerName="mariadb-database-create" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.155077 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4c4ff25-5692-417b-bd4c-53fb2cbedba7" containerName="mariadb-database-create" Feb 27 17:16:00 crc kubenswrapper[4708]: E0227 17:16:00.155100 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3571f1c-e23d-479d-aceb-d1b79d5b1de0" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.155113 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3571f1c-e23d-479d-aceb-d1b79d5b1de0" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: E0227 17:16:00.155134 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87b33cfd-36db-424a-9225-a9a35b8a8562" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.155147 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="87b33cfd-36db-424a-9225-a9a35b8a8562" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.155440 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3571f1c-e23d-479d-aceb-d1b79d5b1de0" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.155457 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="87b33cfd-36db-424a-9225-a9a35b8a8562" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.155477 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76c9acf-0333-4355-9c57-46fd59f26866" containerName="mariadb-database-create" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.155528 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="76927836-595f-41d2-ba31-e1e4de928b09" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.155743 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="6bcf0e9a-a14c-4b1f-8406-22719bee5979" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.155764 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d0e56c4e-da77-42ed-b415-fafbb5e465ca" containerName="mariadb-account-create-update" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.155786 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1bfca09-eb7d-485b-97b2-84ba0df72b73" containerName="mariadb-database-create" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.155803 4708 
memory_manager.go:354] "RemoveStaleState removing state" podUID="115ecd43-9912-4bf4-933f-4fa0497f0a9d" containerName="mariadb-database-create" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.155828 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4c4ff25-5692-417b-bd4c-53fb2cbedba7" containerName="mariadb-database-create" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.156752 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536876-xtn7r" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.159619 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.159953 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.162809 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.169675 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536876-xtn7r"] Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.276288 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sbr4\" (UniqueName: \"kubernetes.io/projected/5534fd9b-068c-43dc-91af-a5014e8bdb24-kube-api-access-6sbr4\") pod \"auto-csr-approver-29536876-xtn7r\" (UID: \"5534fd9b-068c-43dc-91af-a5014e8bdb24\") " pod="openshift-infra/auto-csr-approver-29536876-xtn7r" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.378660 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sbr4\" (UniqueName: \"kubernetes.io/projected/5534fd9b-068c-43dc-91af-a5014e8bdb24-kube-api-access-6sbr4\") pod \"auto-csr-approver-29536876-xtn7r\" (UID: \"5534fd9b-068c-43dc-91af-a5014e8bdb24\") " pod="openshift-infra/auto-csr-approver-29536876-xtn7r" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.403268 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sbr4\" (UniqueName: \"kubernetes.io/projected/5534fd9b-068c-43dc-91af-a5014e8bdb24-kube-api-access-6sbr4\") pod \"auto-csr-approver-29536876-xtn7r\" (UID: \"5534fd9b-068c-43dc-91af-a5014e8bdb24\") " pod="openshift-infra/auto-csr-approver-29536876-xtn7r" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.500185 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536876-xtn7r" Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.781923 4708 generic.go:334] "Generic (PLEG): container finished" podID="5107d3e0-ea93-4d89-b36c-f726b481e0e0" containerID="099aeb8d257ba7786ef66cfb0741dc5a3d6043d5672c0eb78a4e2ae1a427cdce" exitCode=0 Feb 27 17:16:00 crc kubenswrapper[4708]: I0227 17:16:00.782076 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5107d3e0-ea93-4d89-b36c-f726b481e0e0","Type":"ContainerDied","Data":"099aeb8d257ba7786ef66cfb0741dc5a3d6043d5672c0eb78a4e2ae1a427cdce"} Feb 27 17:16:03 crc kubenswrapper[4708]: I0227 17:16:03.196229 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-lokistack-ingester-0" Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.631220 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.631746 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.772804 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-h7hx9" Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.845081 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-h7hx9" event={"ID":"988145b2-7dc5-4a8e-8206-bf03ab36fb2a","Type":"ContainerDied","Data":"464c522bdfdd4f24d061688e2f1d1277c9fc4750ecc91063d331bf6f1bd934ef"} Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.845119 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="464c522bdfdd4f24d061688e2f1d1277c9fc4750ecc91063d331bf6f1bd934ef" Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.845181 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-h7hx9" Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.886614 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgts4\" (UniqueName: \"kubernetes.io/projected/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-kube-api-access-lgts4\") pod \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\" (UID: \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\") " Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.886802 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-combined-ca-bundle\") pod \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\" (UID: \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\") " Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.886867 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-config-data\") pod \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\" (UID: \"988145b2-7dc5-4a8e-8206-bf03ab36fb2a\") " Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.890931 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-kube-api-access-lgts4" (OuterVolumeSpecName: "kube-api-access-lgts4") pod "988145b2-7dc5-4a8e-8206-bf03ab36fb2a" (UID: "988145b2-7dc5-4a8e-8206-bf03ab36fb2a"). InnerVolumeSpecName "kube-api-access-lgts4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.931346 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "988145b2-7dc5-4a8e-8206-bf03ab36fb2a" (UID: "988145b2-7dc5-4a8e-8206-bf03ab36fb2a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.937547 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-config-data" (OuterVolumeSpecName: "config-data") pod "988145b2-7dc5-4a8e-8206-bf03ab36fb2a" (UID: "988145b2-7dc5-4a8e-8206-bf03ab36fb2a"). InnerVolumeSpecName "config-data". 
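
The probe failures recorded above (cloudkitty-lokistack-ingester-0's readiness probe returning 503, machine-config-daemon's liveness probe getting connection refused on 127.0.0.1:8798) are plain HTTP GETs issued by the kubelet prober, which treats any status from 200 through 399 as success and anything else, including a failed dial, as failure. A minimal stand-alone sketch of that check against the endpoint taken from the log; the 1-second timeout is an illustrative choice, not kubelet's configured value:

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: time.Second}
	resp, err := client.Get("http://127.0.0.1:8798/health")
	if err != nil {
		// With nothing listening, this yields the same "connect: connection
		// refused" text recorded in the journal entries above.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 200 && resp.StatusCode < 400 {
		fmt.Println("probe ok:", resp.Status)
	} else {
		fmt.Println("probe failed with statuscode:", resp.StatusCode)
	}
}
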
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.988985 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgts4\" (UniqueName: \"kubernetes.io/projected/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-kube-api-access-lgts4\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.989022 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:05 crc kubenswrapper[4708]: I0227 17:16:05.989035 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/988145b2-7dc5-4a8e-8206-bf03ab36fb2a-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:06 crc kubenswrapper[4708]: I0227 17:16:06.175429 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536876-xtn7r"] Feb 27 17:16:06 crc kubenswrapper[4708]: W0227 17:16:06.178910 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5534fd9b_068c_43dc_91af_a5014e8bdb24.slice/crio-e10e7f829be8ae05874577a74c275666ef3a72553791430cb856446695f34417 WatchSource:0}: Error finding container e10e7f829be8ae05874577a74c275666ef3a72553791430cb856446695f34417: Status 404 returned error can't find the container with id e10e7f829be8ae05874577a74c275666ef3a72553791430cb856446695f34417 Feb 27 17:16:06 crc kubenswrapper[4708]: I0227 17:16:06.866453 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536876-xtn7r" event={"ID":"5534fd9b-068c-43dc-91af-a5014e8bdb24","Type":"ContainerStarted","Data":"e10e7f829be8ae05874577a74c275666ef3a72553791430cb856446695f34417"} Feb 27 17:16:06 crc kubenswrapper[4708]: I0227 17:16:06.869477 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5107d3e0-ea93-4d89-b36c-f726b481e0e0","Type":"ContainerStarted","Data":"b770fb4ee8916f76ff32b2a29dd05d590210daa59af63deee0912553bec4e977"} Feb 27 17:16:06 crc kubenswrapper[4708]: I0227 17:16:06.871397 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ql5zj" event={"ID":"aee9dccb-4475-404d-b169-496cc3ae6a2b","Type":"ContainerStarted","Data":"c339fa7ad4e83fb5976f1214326e916f36bef9cdd1e1eed1a9417d4a57ce5f39"} Feb 27 17:16:06 crc kubenswrapper[4708]: I0227 17:16:06.900395 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-ql5zj" podStartSLOduration=2.882010631 podStartE2EDuration="16.900377568s" podCreationTimestamp="2026-02-27 17:15:50 +0000 UTC" firstStartedPulling="2026-02-27 17:15:51.764281402 +0000 UTC m=+1350.280078989" lastFinishedPulling="2026-02-27 17:16:05.782648299 +0000 UTC m=+1364.298445926" observedRunningTime="2026-02-27 17:16:06.893161493 +0000 UTC m=+1365.408959150" watchObservedRunningTime="2026-02-27 17:16:06.900377568 +0000 UTC m=+1365.416175165" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.074706 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-nnndt"] Feb 27 17:16:07 crc kubenswrapper[4708]: E0227 17:16:07.075151 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="988145b2-7dc5-4a8e-8206-bf03ab36fb2a" containerName="keystone-db-sync" Feb 27 17:16:07 crc kubenswrapper[4708]: 
I0227 17:16:07.075170 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="988145b2-7dc5-4a8e-8206-bf03ab36fb2a" containerName="keystone-db-sync" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.075341 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="988145b2-7dc5-4a8e-8206-bf03ab36fb2a" containerName="keystone-db-sync" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.076221 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.091627 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-nnndt"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.125256 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-5mx8r"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.126465 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.139352 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.139535 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.139636 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.139790 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-shlxn" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.139922 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.167722 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5mx8r"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.216353 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-scripts\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.216419 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-dns-svc\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.216469 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4z4dh\" (UniqueName: \"kubernetes.io/projected/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-kube-api-access-4z4dh\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.216514 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: 
\"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.216531 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-config-data\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.216546 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-config\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.216563 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjt6q\" (UniqueName: \"kubernetes.io/projected/fb921991-07a8-478a-b73f-405159a3c2db-kube-api-access-rjt6q\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.216582 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.216610 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-fernet-keys\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.216633 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-credential-keys\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.216661 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-combined-ca-bundle\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.268458 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-jbvsr"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.269541 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jbvsr" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.274937 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4f5nw" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.275191 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.275294 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.325077 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jbvsr"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.340702 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-fernet-keys\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.340781 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-credential-keys\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.340900 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-combined-ca-bundle\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.340977 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-scripts\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.341070 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-dns-svc\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.341162 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4z4dh\" (UniqueName: \"kubernetes.io/projected/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-kube-api-access-4z4dh\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.341238 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-config-data\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.341257 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.341276 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-config\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.341299 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjt6q\" (UniqueName: \"kubernetes.io/projected/fb921991-07a8-478a-b73f-405159a3c2db-kube-api-access-rjt6q\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.341328 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.343935 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-ovsdbserver-nb\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.347294 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-dns-svc\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.371114 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-config\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.376765 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-ovsdbserver-sb\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.377827 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-fernet-keys\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.391183 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-combined-ca-bundle\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " 
pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.395218 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-s4ckm"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.416755 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-credential-keys\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.417470 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.428803 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.429607 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4z4dh\" (UniqueName: \"kubernetes.io/projected/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-kube-api-access-4z4dh\") pod \"dnsmasq-dns-f877ddd87-nnndt\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") " pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.430114 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hrv76" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.430259 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.431282 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-config-data\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.444737 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c3f22956-f17c-4339-b166-a3c29355b5d2-config\") pod \"neutron-db-sync-jbvsr\" (UID: \"c3f22956-f17c-4339-b166-a3c29355b5d2\") " pod="openstack/neutron-db-sync-jbvsr" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.444805 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f22956-f17c-4339-b166-a3c29355b5d2-combined-ca-bundle\") pod \"neutron-db-sync-jbvsr\" (UID: \"c3f22956-f17c-4339-b166-a3c29355b5d2\") " pod="openstack/neutron-db-sync-jbvsr" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.444887 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgjvl\" (UniqueName: \"kubernetes.io/projected/c3f22956-f17c-4339-b166-a3c29355b5d2-kube-api-access-rgjvl\") pod \"neutron-db-sync-jbvsr\" (UID: \"c3f22956-f17c-4339-b166-a3c29355b5d2\") " pod="openstack/neutron-db-sync-jbvsr" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.456005 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjt6q\" (UniqueName: \"kubernetes.io/projected/fb921991-07a8-478a-b73f-405159a3c2db-kube-api-access-rjt6q\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " 
pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.457420 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-scripts\") pod \"keystone-bootstrap-5mx8r\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.469919 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.518193 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-s4ckm"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.537991 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-ggwzp"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.539207 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ggwzp" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.543633 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-smdlt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.543899 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.556293 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c3f22956-f17c-4339-b166-a3c29355b5d2-config\") pod \"neutron-db-sync-jbvsr\" (UID: \"c3f22956-f17c-4339-b166-a3c29355b5d2\") " pod="openstack/neutron-db-sync-jbvsr" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.556415 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f22956-f17c-4339-b166-a3c29355b5d2-combined-ca-bundle\") pod \"neutron-db-sync-jbvsr\" (UID: \"c3f22956-f17c-4339-b166-a3c29355b5d2\") " pod="openstack/neutron-db-sync-jbvsr" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.556495 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-combined-ca-bundle\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.556589 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rgjvl\" (UniqueName: \"kubernetes.io/projected/c3f22956-f17c-4339-b166-a3c29355b5d2-kube-api-access-rgjvl\") pod \"neutron-db-sync-jbvsr\" (UID: \"c3f22956-f17c-4339-b166-a3c29355b5d2\") " pod="openstack/neutron-db-sync-jbvsr" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.556644 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-db-sync-config-data\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.556679 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-etc-machine-id\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.556717 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-scripts\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.556733 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-config-data\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.556767 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv87w\" (UniqueName: \"kubernetes.io/projected/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-kube-api-access-qv87w\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.558291 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-nnndt"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.558678 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.561808 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f22956-f17c-4339-b166-a3c29355b5d2-combined-ca-bundle\") pod \"neutron-db-sync-jbvsr\" (UID: \"c3f22956-f17c-4339-b166-a3c29355b5d2\") " pod="openstack/neutron-db-sync-jbvsr" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.563594 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c3f22956-f17c-4339-b166-a3c29355b5d2-config\") pod \"neutron-db-sync-jbvsr\" (UID: \"c3f22956-f17c-4339-b166-a3c29355b5d2\") " pod="openstack/neutron-db-sync-jbvsr" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.566908 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-jgfws"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.568183 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.571612 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.571784 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.571976 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rzhr2" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.584072 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rgjvl\" (UniqueName: \"kubernetes.io/projected/c3f22956-f17c-4339-b166-a3c29355b5d2-kube-api-access-rgjvl\") pod \"neutron-db-sync-jbvsr\" (UID: \"c3f22956-f17c-4339-b166-a3c29355b5d2\") " pod="openstack/neutron-db-sync-jbvsr" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.603288 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ggwzp"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.609705 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jgfws"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.630901 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-hzj5k"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.643185 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.644265 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jbvsr" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.656708 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-lhfzc"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.658610 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.658781 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-scripts\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.658820 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-config-data\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.658857 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-scripts\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.658881 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qv87w\" (UniqueName: \"kubernetes.io/projected/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-kube-api-access-qv87w\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.658924 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd272ccd-a2cc-433f-80bf-96134126ce6b-combined-ca-bundle\") pod \"barbican-db-sync-ggwzp\" (UID: \"dd272ccd-a2cc-433f-80bf-96134126ce6b\") " pod="openstack/barbican-db-sync-ggwzp" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.658961 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-combined-ca-bundle\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.659000 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npjt2\" (UniqueName: \"kubernetes.io/projected/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-kube-api-access-npjt2\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.659023 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-logs\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.659038 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dd272ccd-a2cc-433f-80bf-96134126ce6b-db-sync-config-data\") pod \"barbican-db-sync-ggwzp\" (UID: \"dd272ccd-a2cc-433f-80bf-96134126ce6b\") " pod="openstack/barbican-db-sync-ggwzp" Feb 
27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.659060 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9xr6\" (UniqueName: \"kubernetes.io/projected/dd272ccd-a2cc-433f-80bf-96134126ce6b-kube-api-access-v9xr6\") pod \"barbican-db-sync-ggwzp\" (UID: \"dd272ccd-a2cc-433f-80bf-96134126ce6b\") " pod="openstack/barbican-db-sync-ggwzp" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.659085 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-combined-ca-bundle\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.659111 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-config-data\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.659154 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-db-sync-config-data\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.659184 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-etc-machine-id\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.660724 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-etc-machine-id\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.661724 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.662064 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-2sp9f" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.662296 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.662388 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-combined-ca-bundle\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.677555 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.679364 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-db-sync-config-data\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.690699 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-hzj5k"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.695355 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qv87w\" (UniqueName: \"kubernetes.io/projected/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-kube-api-access-qv87w\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.696177 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-config-data\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.704682 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-scripts\") pod \"cinder-db-sync-s4ckm\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") " pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.735977 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-lhfzc"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.760904 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88j8k\" (UniqueName: \"kubernetes.io/projected/60cc75e4-619b-4cb8-a663-3214b22f2b43-kube-api-access-88j8k\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.760981 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-config-data\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.761022 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.761066 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-scripts\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.761087 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc 
kubenswrapper[4708]: I0227 17:16:07.761123 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd272ccd-a2cc-433f-80bf-96134126ce6b-combined-ca-bundle\") pod \"barbican-db-sync-ggwzp\" (UID: \"dd272ccd-a2cc-433f-80bf-96134126ce6b\") " pod="openstack/barbican-db-sync-ggwzp" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.762867 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.762916 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnhfv\" (UniqueName: \"kubernetes.io/projected/76e1fee2-5549-44d4-aaab-c70ad0fb083e-kube-api-access-hnhfv\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.762947 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-combined-ca-bundle\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.762985 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-config-data\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.763002 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.763074 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/76e1fee2-5549-44d4-aaab-c70ad0fb083e-certs\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.763091 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-scripts\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.763115 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npjt2\" (UniqueName: \"kubernetes.io/projected/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-kube-api-access-npjt2\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: 
I0227 17:16:07.763162 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-logs\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.763177 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-combined-ca-bundle\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.763196 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dd272ccd-a2cc-433f-80bf-96134126ce6b-db-sync-config-data\") pod \"barbican-db-sync-ggwzp\" (UID: \"dd272ccd-a2cc-433f-80bf-96134126ce6b\") " pod="openstack/barbican-db-sync-ggwzp" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.763215 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-config\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.763250 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v9xr6\" (UniqueName: \"kubernetes.io/projected/dd272ccd-a2cc-433f-80bf-96134126ce6b-kube-api-access-v9xr6\") pod \"barbican-db-sync-ggwzp\" (UID: \"dd272ccd-a2cc-433f-80bf-96134126ce6b\") " pod="openstack/barbican-db-sync-ggwzp" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.768249 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-logs\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.773177 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-scripts\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.786511 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/e8a41f59-1fee-425c-a42a-de40caa66c0f-etc-swift\") pod \"swift-storage-0\" (UID: \"e8a41f59-1fee-425c-a42a-de40caa66c0f\") " pod="openstack/swift-storage-0" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.790982 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9xr6\" (UniqueName: \"kubernetes.io/projected/dd272ccd-a2cc-433f-80bf-96134126ce6b-kube-api-access-v9xr6\") pod \"barbican-db-sync-ggwzp\" (UID: \"dd272ccd-a2cc-433f-80bf-96134126ce6b\") " pod="openstack/barbican-db-sync-ggwzp" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.803213 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-combined-ca-bundle\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.803452 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-config-data\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.804962 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dd272ccd-a2cc-433f-80bf-96134126ce6b-db-sync-config-data\") pod \"barbican-db-sync-ggwzp\" (UID: \"dd272ccd-a2cc-433f-80bf-96134126ce6b\") " pod="openstack/barbican-db-sync-ggwzp" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.807539 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-npjt2\" (UniqueName: \"kubernetes.io/projected/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-kube-api-access-npjt2\") pod \"placement-db-sync-jgfws\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.808189 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd272ccd-a2cc-433f-80bf-96134126ce6b-combined-ca-bundle\") pod \"barbican-db-sync-ggwzp\" (UID: \"dd272ccd-a2cc-433f-80bf-96134126ce6b\") " pod="openstack/barbican-db-sync-ggwzp" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.838528 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.840590 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.844296 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.844536 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.848212 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-s4ckm" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.865558 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.865624 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.865811 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnhfv\" (UniqueName: \"kubernetes.io/projected/76e1fee2-5549-44d4-aaab-c70ad0fb083e-kube-api-access-hnhfv\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.865836 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-config-data\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.865864 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.865895 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/76e1fee2-5549-44d4-aaab-c70ad0fb083e-certs\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.865912 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-scripts\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.865931 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-combined-ca-bundle\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.865947 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-config\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.865973 4708 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88j8k\" (UniqueName: \"kubernetes.io/projected/60cc75e4-619b-4cb8-a663-3214b22f2b43-kube-api-access-88j8k\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.867570 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-ovsdbserver-sb\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.868374 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-ovsdbserver-nb\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.870906 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-dns-svc\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.872611 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-config-data\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.876765 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-combined-ca-bundle\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.877142 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/76e1fee2-5549-44d4-aaab-c70ad0fb083e-certs\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.879513 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-config\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.881567 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88j8k\" (UniqueName: \"kubernetes.io/projected/60cc75e4-619b-4cb8-a663-3214b22f2b43-kube-api-access-88j8k\") pod \"dnsmasq-dns-68dcc9cf6f-hzj5k\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") " pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.884065 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.884707 4708 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ggwzp" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.887493 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-scripts\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.915150 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnhfv\" (UniqueName: \"kubernetes.io/projected/76e1fee2-5549-44d4-aaab-c70ad0fb083e-kube-api-access-hnhfv\") pod \"cloudkitty-db-sync-lhfzc\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") " pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.920085 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jgfws" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.949960 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.969811 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-run-httpd\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.969841 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppg6d\" (UniqueName: \"kubernetes.io/projected/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-kube-api-access-ppg6d\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.969892 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.969916 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-log-httpd\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.969964 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-config-data\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.970025 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.970087 4708 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-scripts\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:07 crc kubenswrapper[4708]: I0227 17:16:07.976787 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.028915 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-lhfzc" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.071993 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-run-httpd\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.072034 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppg6d\" (UniqueName: \"kubernetes.io/projected/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-kube-api-access-ppg6d\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.072076 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.072100 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-log-httpd\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.072130 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-config-data\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.072160 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.072183 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-scripts\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.075258 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-run-httpd\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.075435 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-log-httpd\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.080613 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-scripts\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.083677 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-config-data\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.091394 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.093321 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.099190 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppg6d\" (UniqueName: \"kubernetes.io/projected/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-kube-api-access-ppg6d\") pod \"ceilometer-0\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.277862 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-nnndt"] Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.300899 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-5mx8r"] Feb 27 17:16:08 crc kubenswrapper[4708]: W0227 17:16:08.331950 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeea4ba83_b5c5_49f5_b18f_1c2a667beecc.slice/crio-b96038ada28d6b6e6089acb2a0ff3b6e0fc637e367b836861dda4646d69c11b2 WatchSource:0}: Error finding container b96038ada28d6b6e6089acb2a0ff3b6e0fc637e367b836861dda4646d69c11b2: Status 404 returned error can't find the container with id b96038ada28d6b6e6089acb2a0ff3b6e0fc637e367b836861dda4646d69c11b2 Feb 27 17:16:08 crc kubenswrapper[4708]: W0227 17:16:08.345190 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb921991_07a8_478a_b73f_405159a3c2db.slice/crio-9f0dca03ba2acc816a4df1b840a890583c11b0ada3d3f9557274d98dc88dd44b WatchSource:0}: Error finding container 9f0dca03ba2acc816a4df1b840a890583c11b0ada3d3f9557274d98dc88dd44b: Status 404 returned error can't find the container with id 9f0dca03ba2acc816a4df1b840a890583c11b0ada3d3f9557274d98dc88dd44b Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.353405 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.517901 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-jbvsr"] Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.652611 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-jgfws"] Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.670438 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-ggwzp"] Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.686854 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-s4ckm"] Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.695826 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-hzj5k"] Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.901111 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-lhfzc"] Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.909897 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5mx8r" event={"ID":"fb921991-07a8-478a-b73f-405159a3c2db","Type":"ContainerStarted","Data":"000cf3a64ca9cac4dc0a575a3326daeb83f13abd60f5f2a76ee98caa21c0a485"} Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.909941 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5mx8r" event={"ID":"fb921991-07a8-478a-b73f-405159a3c2db","Type":"ContainerStarted","Data":"9f0dca03ba2acc816a4df1b840a890583c11b0ada3d3f9557274d98dc88dd44b"} Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.912273 4708 generic.go:334] "Generic (PLEG): container finished" podID="5534fd9b-068c-43dc-91af-a5014e8bdb24" containerID="af15d0deceef92f05aed99432d465f8cad5a5660703d521bccc8a3ebae507d6a" exitCode=0 Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.912319 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536876-xtn7r" event={"ID":"5534fd9b-068c-43dc-91af-a5014e8bdb24","Type":"ContainerDied","Data":"af15d0deceef92f05aed99432d465f8cad5a5660703d521bccc8a3ebae507d6a"} Feb 27 17:16:08 crc kubenswrapper[4708]: W0227 17:16:08.915937 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod76e1fee2_5549_44d4_aaab_c70ad0fb083e.slice/crio-2c8f24933bbef6410f1f12a9f07c14058d5dca33642d26bafaeb29eb7243b677 WatchSource:0}: Error finding container 2c8f24933bbef6410f1f12a9f07c14058d5dca33642d26bafaeb29eb7243b677: Status 404 returned error can't find the container with id 2c8f24933bbef6410f1f12a9f07c14058d5dca33642d26bafaeb29eb7243b677 Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.916613 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jgfws" event={"ID":"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a","Type":"ContainerStarted","Data":"8ee05cb3b59904fde3fc3ef8ac349ba6d7930567f6bb199221d5db37ce02a02b"} Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.927460 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-5mx8r" podStartSLOduration=1.9274410450000001 podStartE2EDuration="1.927441045s" podCreationTimestamp="2026-02-27 17:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:16:08.924157502 +0000 UTC m=+1367.439955089" 
Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.929072 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5107d3e0-ea93-4d89-b36c-f726b481e0e0","Type":"ContainerStarted","Data":"9c446ad5787b1a94d90116389d26215a8790941f711236f023b67411914eeec3"}
Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.931472 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ggwzp" event={"ID":"dd272ccd-a2cc-433f-80bf-96134126ce6b","Type":"ContainerStarted","Data":"baa4bdf8ec6ee02a2dcece05a2064e820b9e5fa345cc5d5683f82c96531049c8"}
Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.939985 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" event={"ID":"60cc75e4-619b-4cb8-a663-3214b22f2b43","Type":"ContainerStarted","Data":"3a0c2850e6c4975cdf610cd79927840e6f46a5d4b9550bfbef11253d23af5d66"}
Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.942225 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jbvsr" event={"ID":"c3f22956-f17c-4339-b166-a3c29355b5d2","Type":"ContainerStarted","Data":"8766969c1f49e469e63c843a81cfd147ee700bde1f34b4d64fb37c3682253fd8"}
Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.951889 4708 generic.go:334] "Generic (PLEG): container finished" podID="eea4ba83-b5c5-49f5-b18f-1c2a667beecc" containerID="ecc1e1518b3cce8c892c185e9a942d391de32e837247a53391dabab7698f157a" exitCode=0
Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.951951 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-nnndt" event={"ID":"eea4ba83-b5c5-49f5-b18f-1c2a667beecc","Type":"ContainerDied","Data":"ecc1e1518b3cce8c892c185e9a942d391de32e837247a53391dabab7698f157a"}
Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.951975 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-nnndt" event={"ID":"eea4ba83-b5c5-49f5-b18f-1c2a667beecc","Type":"ContainerStarted","Data":"b96038ada28d6b6e6089acb2a0ff3b6e0fc637e367b836861dda4646d69c11b2"}
Feb 27 17:16:08 crc kubenswrapper[4708]: I0227 17:16:08.968988 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s4ckm" event={"ID":"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5","Type":"ContainerStarted","Data":"15e110982efa8d48f8047d340a81a15741809b95dfcb8ff635da1c80e215f375"}
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.006008 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.065534 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.299724 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.365743 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-nnndt"
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.400334 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-config\") pod \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") "
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.400378 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4z4dh\" (UniqueName: \"kubernetes.io/projected/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-kube-api-access-4z4dh\") pod \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") "
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.400410 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-dns-svc\") pod \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") "
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.401537 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-ovsdbserver-sb\") pod \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") "
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.401606 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-ovsdbserver-nb\") pod \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\" (UID: \"eea4ba83-b5c5-49f5-b18f-1c2a667beecc\") "
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.406425 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-kube-api-access-4z4dh" (OuterVolumeSpecName: "kube-api-access-4z4dh") pod "eea4ba83-b5c5-49f5-b18f-1c2a667beecc" (UID: "eea4ba83-b5c5-49f5-b18f-1c2a667beecc"). InnerVolumeSpecName "kube-api-access-4z4dh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.446081 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-config" (OuterVolumeSpecName: "config") pod "eea4ba83-b5c5-49f5-b18f-1c2a667beecc" (UID: "eea4ba83-b5c5-49f5-b18f-1c2a667beecc"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.459358 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "eea4ba83-b5c5-49f5-b18f-1c2a667beecc" (UID: "eea4ba83-b5c5-49f5-b18f-1c2a667beecc"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.467488 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "eea4ba83-b5c5-49f5-b18f-1c2a667beecc" (UID: "eea4ba83-b5c5-49f5-b18f-1c2a667beecc"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.473365 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "eea4ba83-b5c5-49f5-b18f-1c2a667beecc" (UID: "eea4ba83-b5c5-49f5-b18f-1c2a667beecc"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.503211 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.503243 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-config\") on node \"crc\" DevicePath \"\""
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.503254 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4z4dh\" (UniqueName: \"kubernetes.io/projected/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-kube-api-access-4z4dh\") on node \"crc\" DevicePath \"\""
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.503263 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.503272 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/eea4ba83-b5c5-49f5-b18f-1c2a667beecc-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Need to start a new one" pod="openstack/dnsmasq-dns-f877ddd87-nnndt" Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.979170 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f877ddd87-nnndt" event={"ID":"eea4ba83-b5c5-49f5-b18f-1c2a667beecc","Type":"ContainerDied","Data":"b96038ada28d6b6e6089acb2a0ff3b6e0fc637e367b836861dda4646d69c11b2"} Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.980724 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"09e3e541208484b4d6e50dc239440dff048498fad6ae15ebf001f4915be99f1f"} Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.980766 4708 scope.go:117] "RemoveContainer" containerID="ecc1e1518b3cce8c892c185e9a942d391de32e837247a53391dabab7698f157a" Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.988025 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/prometheus-metric-storage-0" event={"ID":"5107d3e0-ea93-4d89-b36c-f726b481e0e0","Type":"ContainerStarted","Data":"15d59b532a51575d1f72f7a9f8ec78026f0548666de601949adceb705f4378e7"} Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.989730 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d047b4cb-8a38-4b0b-b667-0b78aeb2a166","Type":"ContainerStarted","Data":"8a5346cdee744ed3f3957aefba2ee609d1109de637e189b1dda554792bdcac72"} Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.992217 4708 generic.go:334] "Generic (PLEG): container finished" podID="60cc75e4-619b-4cb8-a663-3214b22f2b43" containerID="d540d71d9b8916e4c9b64513cde7622af353797043d549c4855fbe423659a731" exitCode=0 Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.992257 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" event={"ID":"60cc75e4-619b-4cb8-a663-3214b22f2b43","Type":"ContainerDied","Data":"d540d71d9b8916e4c9b64513cde7622af353797043d549c4855fbe423659a731"} Feb 27 17:16:09 crc kubenswrapper[4708]: I0227 17:16:09.998079 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-lhfzc" event={"ID":"76e1fee2-5549-44d4-aaab-c70ad0fb083e","Type":"ContainerStarted","Data":"2c8f24933bbef6410f1f12a9f07c14058d5dca33642d26bafaeb29eb7243b677"} Feb 27 17:16:10 crc kubenswrapper[4708]: I0227 17:16:10.022776 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jbvsr" event={"ID":"c3f22956-f17c-4339-b166-a3c29355b5d2","Type":"ContainerStarted","Data":"a7e5563519d075cc03eeb2eafcb7bf8dd8bb05152315ecb7a3b88557da4e5208"} Feb 27 17:16:10 crc kubenswrapper[4708]: I0227 17:16:10.024811 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/prometheus-metric-storage-0" podStartSLOduration=22.024794786 podStartE2EDuration="22.024794786s" podCreationTimestamp="2026-02-27 17:15:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:16:10.013987859 +0000 UTC m=+1368.529785466" watchObservedRunningTime="2026-02-27 17:16:10.024794786 +0000 UTC m=+1368.540592373" Feb 27 17:16:10 crc kubenswrapper[4708]: I0227 17:16:10.261971 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-jbvsr" podStartSLOduration=3.2619497539999998 podStartE2EDuration="3.261949754s" podCreationTimestamp="2026-02-27 17:16:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:16:10.057398521 +0000 UTC m=+1368.573196108" watchObservedRunningTime="2026-02-27 17:16:10.261949754 +0000 UTC m=+1368.777747341" Feb 27 17:16:10 crc kubenswrapper[4708]: I0227 17:16:10.282907 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-nnndt"] Feb 27 17:16:10 crc kubenswrapper[4708]: I0227 17:16:10.303244 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f877ddd87-nnndt"] Feb 27 17:16:10 crc kubenswrapper[4708]: I0227 17:16:10.433448 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536876-xtn7r" Feb 27 17:16:10 crc kubenswrapper[4708]: I0227 17:16:10.529372 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sbr4\" (UniqueName: \"kubernetes.io/projected/5534fd9b-068c-43dc-91af-a5014e8bdb24-kube-api-access-6sbr4\") pod \"5534fd9b-068c-43dc-91af-a5014e8bdb24\" (UID: \"5534fd9b-068c-43dc-91af-a5014e8bdb24\") " Feb 27 17:16:10 crc kubenswrapper[4708]: I0227 17:16:10.537713 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5534fd9b-068c-43dc-91af-a5014e8bdb24-kube-api-access-6sbr4" (OuterVolumeSpecName: "kube-api-access-6sbr4") pod "5534fd9b-068c-43dc-91af-a5014e8bdb24" (UID: "5534fd9b-068c-43dc-91af-a5014e8bdb24"). InnerVolumeSpecName "kube-api-access-6sbr4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:16:10 crc kubenswrapper[4708]: I0227 17:16:10.632023 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sbr4\" (UniqueName: \"kubernetes.io/projected/5534fd9b-068c-43dc-91af-a5014e8bdb24-kube-api-access-6sbr4\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:11 crc kubenswrapper[4708]: I0227 17:16:11.044879 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536876-xtn7r" event={"ID":"5534fd9b-068c-43dc-91af-a5014e8bdb24","Type":"ContainerDied","Data":"e10e7f829be8ae05874577a74c275666ef3a72553791430cb856446695f34417"} Feb 27 17:16:11 crc kubenswrapper[4708]: I0227 17:16:11.045148 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e10e7f829be8ae05874577a74c275666ef3a72553791430cb856446695f34417" Feb 27 17:16:11 crc kubenswrapper[4708]: I0227 17:16:11.045257 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536876-xtn7r" Feb 27 17:16:11 crc kubenswrapper[4708]: I0227 17:16:11.052153 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" event={"ID":"60cc75e4-619b-4cb8-a663-3214b22f2b43","Type":"ContainerStarted","Data":"c4fc70eb344877614cd7b1bf19cc6a2fd4b8f3b0ee9dd7ddb68b6e1ce272e2ca"} Feb 27 17:16:11 crc kubenswrapper[4708]: I0227 17:16:11.052729 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:11 crc kubenswrapper[4708]: I0227 17:16:11.094326 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" podStartSLOduration=4.094305398 podStartE2EDuration="4.094305398s" podCreationTimestamp="2026-02-27 17:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:16:11.073769345 +0000 UTC m=+1369.589566932" watchObservedRunningTime="2026-02-27 17:16:11.094305398 +0000 UTC m=+1369.610102985" Feb 27 17:16:11 crc kubenswrapper[4708]: I0227 17:16:11.482771 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536870-zwf6n"] Feb 27 17:16:11 crc kubenswrapper[4708]: I0227 17:16:11.490073 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536870-zwf6n"] Feb 27 17:16:12 crc kubenswrapper[4708]: I0227 17:16:12.243988 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="472fdd57-63d6-48e4-90b4-ea859313d030" path="/var/lib/kubelet/pods/472fdd57-63d6-48e4-90b4-ea859313d030/volumes" Feb 27 17:16:12 crc kubenswrapper[4708]: I0227 17:16:12.245016 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eea4ba83-b5c5-49f5-b18f-1c2a667beecc" path="/var/lib/kubelet/pods/eea4ba83-b5c5-49f5-b18f-1c2a667beecc/volumes" Feb 27 17:16:14 crc kubenswrapper[4708]: I0227 17:16:14.266016 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/prometheus-metric-storage-0" Feb 27 17:16:15 crc kubenswrapper[4708]: I0227 17:16:15.124686 4708 generic.go:334] "Generic (PLEG): container finished" podID="fb921991-07a8-478a-b73f-405159a3c2db" containerID="000cf3a64ca9cac4dc0a575a3326daeb83f13abd60f5f2a76ee98caa21c0a485" exitCode=0 Feb 27 17:16:15 crc kubenswrapper[4708]: I0227 17:16:15.125004 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5mx8r" event={"ID":"fb921991-07a8-478a-b73f-405159a3c2db","Type":"ContainerDied","Data":"000cf3a64ca9cac4dc0a575a3326daeb83f13abd60f5f2a76ee98caa21c0a485"} Feb 27 17:16:17 crc kubenswrapper[4708]: I0227 17:16:17.979102 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" Feb 27 17:16:18 crc kubenswrapper[4708]: I0227 17:16:18.049471 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-svwxj"] Feb 27 17:16:18 crc kubenswrapper[4708]: I0227 17:16:18.049685 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-698758b865-svwxj" podUID="3eca4c12-77bb-4e32-9738-1d29f1d2174a" containerName="dnsmasq-dns" containerID="cri-o://4ef9315f37c4e43eb2653e30684c7f05d89304dfd469839b4de9cc866ad329d4" gracePeriod=10 Feb 27 17:16:19 crc kubenswrapper[4708]: I0227 17:16:19.111119 4708 prober.go:107] "Probe failed" probeType="Readiness" 
pod="openstack/dnsmasq-dns-698758b865-svwxj" podUID="3eca4c12-77bb-4e32-9738-1d29f1d2174a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Feb 27 17:16:19 crc kubenswrapper[4708]: I0227 17:16:19.192024 4708 generic.go:334] "Generic (PLEG): container finished" podID="3eca4c12-77bb-4e32-9738-1d29f1d2174a" containerID="4ef9315f37c4e43eb2653e30684c7f05d89304dfd469839b4de9cc866ad329d4" exitCode=0 Feb 27 17:16:19 crc kubenswrapper[4708]: I0227 17:16:19.192089 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-svwxj" event={"ID":"3eca4c12-77bb-4e32-9738-1d29f1d2174a","Type":"ContainerDied","Data":"4ef9315f37c4e43eb2653e30684c7f05d89304dfd469839b4de9cc866ad329d4"} Feb 27 17:16:19 crc kubenswrapper[4708]: I0227 17:16:19.265834 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/prometheus-metric-storage-0" Feb 27 17:16:19 crc kubenswrapper[4708]: I0227 17:16:19.275027 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/prometheus-metric-storage-0" Feb 27 17:16:20 crc kubenswrapper[4708]: I0227 17:16:20.213444 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/prometheus-metric-storage-0" Feb 27 17:16:24 crc kubenswrapper[4708]: I0227 17:16:24.111544 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-svwxj" podUID="3eca4c12-77bb-4e32-9738-1d29f1d2174a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Feb 27 17:16:29 crc kubenswrapper[4708]: I0227 17:16:29.111763 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-svwxj" podUID="3eca4c12-77bb-4e32-9738-1d29f1d2174a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: connect: connection refused" Feb 27 17:16:29 crc kubenswrapper[4708]: I0227 17:16:29.112291 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.178278 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.313725 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-combined-ca-bundle\") pod \"fb921991-07a8-478a-b73f-405159a3c2db\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.314291 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-fernet-keys\") pod \"fb921991-07a8-478a-b73f-405159a3c2db\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.314354 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-credential-keys\") pod \"fb921991-07a8-478a-b73f-405159a3c2db\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.314390 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjt6q\" (UniqueName: \"kubernetes.io/projected/fb921991-07a8-478a-b73f-405159a3c2db-kube-api-access-rjt6q\") pod \"fb921991-07a8-478a-b73f-405159a3c2db\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.314622 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-config-data\") pod \"fb921991-07a8-478a-b73f-405159a3c2db\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.314799 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-scripts\") pod \"fb921991-07a8-478a-b73f-405159a3c2db\" (UID: \"fb921991-07a8-478a-b73f-405159a3c2db\") " Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.322000 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "fb921991-07a8-478a-b73f-405159a3c2db" (UID: "fb921991-07a8-478a-b73f-405159a3c2db"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.329871 4708 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.344976 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "fb921991-07a8-478a-b73f-405159a3c2db" (UID: "fb921991-07a8-478a-b73f-405159a3c2db"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.345718 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-scripts" (OuterVolumeSpecName: "scripts") pod "fb921991-07a8-478a-b73f-405159a3c2db" (UID: "fb921991-07a8-478a-b73f-405159a3c2db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.352049 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-5mx8r" event={"ID":"fb921991-07a8-478a-b73f-405159a3c2db","Type":"ContainerDied","Data":"9f0dca03ba2acc816a4df1b840a890583c11b0ada3d3f9557274d98dc88dd44b"} Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.352080 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f0dca03ba2acc816a4df1b840a890583c11b0ada3d3f9557274d98dc88dd44b" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.352128 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-5mx8r" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.354937 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-config-data" (OuterVolumeSpecName: "config-data") pod "fb921991-07a8-478a-b73f-405159a3c2db" (UID: "fb921991-07a8-478a-b73f-405159a3c2db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.358141 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb921991-07a8-478a-b73f-405159a3c2db-kube-api-access-rjt6q" (OuterVolumeSpecName: "kube-api-access-rjt6q") pod "fb921991-07a8-478a-b73f-405159a3c2db" (UID: "fb921991-07a8-478a-b73f-405159a3c2db"). InnerVolumeSpecName "kube-api-access-rjt6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.389465 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fb921991-07a8-478a-b73f-405159a3c2db" (UID: "fb921991-07a8-478a-b73f-405159a3c2db"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.433114 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.434020 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.434059 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.434129 4708 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/fb921991-07a8-478a-b73f-405159a3c2db-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:32 crc kubenswrapper[4708]: I0227 17:16:32.434143 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjt6q\" (UniqueName: \"kubernetes.io/projected/fb921991-07a8-478a-b73f-405159a3c2db-kube-api-access-rjt6q\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.288401 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-5mx8r"] Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.300685 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-5mx8r"] Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.364822 4708 generic.go:334] "Generic (PLEG): container finished" podID="aee9dccb-4475-404d-b169-496cc3ae6a2b" containerID="c339fa7ad4e83fb5976f1214326e916f36bef9cdd1e1eed1a9417d4a57ce5f39" exitCode=0 Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.364892 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ql5zj" event={"ID":"aee9dccb-4475-404d-b169-496cc3ae6a2b","Type":"ContainerDied","Data":"c339fa7ad4e83fb5976f1214326e916f36bef9cdd1e1eed1a9417d4a57ce5f39"} Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.415059 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-lzgr7"] Feb 27 17:16:33 crc kubenswrapper[4708]: E0227 17:16:33.415480 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb921991-07a8-478a-b73f-405159a3c2db" containerName="keystone-bootstrap" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.415499 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb921991-07a8-478a-b73f-405159a3c2db" containerName="keystone-bootstrap" Feb 27 17:16:33 crc kubenswrapper[4708]: E0227 17:16:33.415516 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5534fd9b-068c-43dc-91af-a5014e8bdb24" containerName="oc" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.415523 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5534fd9b-068c-43dc-91af-a5014e8bdb24" containerName="oc" Feb 27 17:16:33 crc kubenswrapper[4708]: E0227 17:16:33.415536 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eea4ba83-b5c5-49f5-b18f-1c2a667beecc" containerName="init" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.415544 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="eea4ba83-b5c5-49f5-b18f-1c2a667beecc" 
containerName="init" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.415716 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="5534fd9b-068c-43dc-91af-a5014e8bdb24" containerName="oc" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.415733 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb921991-07a8-478a-b73f-405159a3c2db" containerName="keystone-bootstrap" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.415751 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="eea4ba83-b5c5-49f5-b18f-1c2a667beecc" containerName="init" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.416397 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.418336 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.418955 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.419402 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-shlxn" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.419585 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.425680 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lzgr7"] Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.453076 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-fernet-keys\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.453136 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-combined-ca-bundle\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.453204 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgb6c\" (UniqueName: \"kubernetes.io/projected/108a278a-0da6-4e63-be97-cea8279e7c99-kube-api-access-cgb6c\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.453224 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-credential-keys\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.453301 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-scripts\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") 
" pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.453339 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-config-data\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.554969 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-fernet-keys\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.555021 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-combined-ca-bundle\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.555058 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgb6c\" (UniqueName: \"kubernetes.io/projected/108a278a-0da6-4e63-be97-cea8279e7c99-kube-api-access-cgb6c\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.555075 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-credential-keys\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.555108 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-scripts\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.555132 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-config-data\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.559958 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-scripts\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.560041 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-credential-keys\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.560208 4708 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-fernet-keys\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.560980 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-combined-ca-bundle\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.566418 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-config-data\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.572302 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgb6c\" (UniqueName: \"kubernetes.io/projected/108a278a-0da6-4e63-be97-cea8279e7c99-kube-api-access-cgb6c\") pod \"keystone-bootstrap-lzgr7\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:33 crc kubenswrapper[4708]: I0227 17:16:33.746744 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:16:34 crc kubenswrapper[4708]: I0227 17:16:34.257218 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb921991-07a8-478a-b73f-405159a3c2db" path="/var/lib/kubelet/pods/fb921991-07a8-478a-b73f-405159a3c2db/volumes" Feb 27 17:16:35 crc kubenswrapper[4708]: I0227 17:16:35.633940 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:16:35 crc kubenswrapper[4708]: I0227 17:16:35.634208 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:16:35 crc kubenswrapper[4708]: I0227 17:16:35.634250 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 17:16:35 crc kubenswrapper[4708]: I0227 17:16:35.634949 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1a4a3b793414b4b10c54d77ec77375b6657e6d822660a8ebe494db8ea78162c"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:16:35 crc kubenswrapper[4708]: I0227 17:16:35.634992 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" 
containerID="cri-o://c1a4a3b793414b4b10c54d77ec77375b6657e6d822660a8ebe494db8ea78162c" gracePeriod=600 Feb 27 17:16:36 crc kubenswrapper[4708]: I0227 17:16:36.394418 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="c1a4a3b793414b4b10c54d77ec77375b6657e6d822660a8ebe494db8ea78162c" exitCode=0 Feb 27 17:16:36 crc kubenswrapper[4708]: I0227 17:16:36.394457 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"c1a4a3b793414b4b10c54d77ec77375b6657e6d822660a8ebe494db8ea78162c"} Feb 27 17:16:36 crc kubenswrapper[4708]: I0227 17:16:36.394491 4708 scope.go:117] "RemoveContainer" containerID="39dbd7797d34062ee99cfd72758adf14eea4f4680611bae0c80a2a4882b14a2d" Feb 27 17:16:37 crc kubenswrapper[4708]: E0227 17:16:37.075932 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified" Feb 27 17:16:37 crc kubenswrapper[4708]: E0227 17:16:37.076535 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:barbican-db-sync,Image:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,Command:[/bin/bash],Args:[-c barbican-manage db upgrade],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/barbican/barbican.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v9xr6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42403,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*42403,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod barbican-db-sync-ggwzp_openstack(dd272ccd-a2cc-433f-80bf-96134126ce6b): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 17:16:37 crc kubenswrapper[4708]: E0227 17:16:37.079718 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/barbican-db-sync-ggwzp" podUID="dd272ccd-a2cc-433f-80bf-96134126ce6b" Feb 27 17:16:37 crc kubenswrapper[4708]: E0227 17:16:37.403686 4708 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"barbican-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified\\\"\"" pod="openstack/barbican-db-sync-ggwzp" podUID="dd272ccd-a2cc-433f-80bf-96134126ce6b" Feb 27 17:16:37 crc kubenswrapper[4708]: E0227 17:16:37.492737 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified" Feb 27 17:16:37 crc kubenswrapper[4708]: E0227 17:16:37.492950 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:ceilometer-central-agent,Image:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n579h66dh649h58ch86h567h68bh5c8hb7hdbh689h86hfch5cbh86h8ch6ch677h8fh7fh98hbfh55bh569h664h8chdfhbchcbh698h548hb6q,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:ceilometer-central-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppg6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/python3 /var/lib/openstack/bin/centralhealth.py],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:300,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d047b4cb-8a38-4b0b-b667-0b78aeb2a166): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 17:16:38 crc kubenswrapper[4708]: E0227 17:16:38.611986 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Feb 27 17:16:38 crc kubenswrapper[4708]: E0227 
17:16:38.612479 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qv87w,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-s4ckm_openstack(57f4cfb1-705b-40bb-b7aa-d722d1ec00c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 17:16:38 crc kubenswrapper[4708]: E0227 17:16:38.613713 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-s4ckm" podUID="57f4cfb1-705b-40bb-b7aa-d722d1ec00c5" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.698168 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-ql5zj" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.708665 4708 util.go:48] "No ready sandbox for pod can be found. 
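The three pull failures above share one cause: each quay.io pull was cancelled mid-copy (rpc error: code = Canceled desc = copying config: context canceled), which the kubelet then surfaces as ErrImagePull and, on retry, ImagePullBackOff for the affected db-sync pods. A sketch under the same assumptions as the earlier ones (local kubelet.log copy, illustrative names) that pairs the failing images with the pods stuck behind them:

    # Illustrative sketch only: list images whose pulls failed and the
    # pods reported with ErrImagePull/ImagePullBackOff sync errors.
    import re

    PULL = re.compile(
        r'"PullImage from image service failed".*image="(?P<image>[^"]+)"'
    )
    POD = re.compile(r'pod="(?P<pod>[^"]+)"')

    def pull_failures(path="kubelet.log"):
        """Return (failed images, pods seen backing off on a pull)."""
        images, pods = set(), set()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = PULL.search(line)
                if m:
                    images.add(m.group("image"))
                # pod_workers "Error syncing pod" entries carry a pod="..."
                # attribute; the long container-spec dumps do not, so they
                # are skipped here by construction.
                if "ErrImagePull" in line or "ImagePullBackOff" in line:
                    p = POD.search(line)
                    if p:
                        pods.add(p.group(1))
        return images, pods

    if __name__ == "__main__":
        images, pods = pull_failures()
        print("failing images:", *sorted(images), sep="\n  ")
        print("pods backing off:", *sorted(pods), sep="\n  ")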
Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.870217 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-config\") pod \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.870297 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-combined-ca-bundle\") pod \"aee9dccb-4475-404d-b169-496cc3ae6a2b\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.870321 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-ovsdbserver-nb\") pod \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.870367 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-db-sync-config-data\") pod \"aee9dccb-4475-404d-b169-496cc3ae6a2b\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.870468 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-config-data\") pod \"aee9dccb-4475-404d-b169-496cc3ae6a2b\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.870489 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-ovsdbserver-sb\") pod \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.870518 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bf5w\" (UniqueName: \"kubernetes.io/projected/aee9dccb-4475-404d-b169-496cc3ae6a2b-kube-api-access-6bf5w\") pod \"aee9dccb-4475-404d-b169-496cc3ae6a2b\" (UID: \"aee9dccb-4475-404d-b169-496cc3ae6a2b\") " Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.870558 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8ls9\" (UniqueName: \"kubernetes.io/projected/3eca4c12-77bb-4e32-9738-1d29f1d2174a-kube-api-access-z8ls9\") pod \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.870606 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-dns-svc\") pod \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\" (UID: \"3eca4c12-77bb-4e32-9738-1d29f1d2174a\") " Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.876936 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aee9dccb-4475-404d-b169-496cc3ae6a2b-kube-api-access-6bf5w" (OuterVolumeSpecName: "kube-api-access-6bf5w") pod 
"aee9dccb-4475-404d-b169-496cc3ae6a2b" (UID: "aee9dccb-4475-404d-b169-496cc3ae6a2b"). InnerVolumeSpecName "kube-api-access-6bf5w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.882259 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3eca4c12-77bb-4e32-9738-1d29f1d2174a-kube-api-access-z8ls9" (OuterVolumeSpecName: "kube-api-access-z8ls9") pod "3eca4c12-77bb-4e32-9738-1d29f1d2174a" (UID: "3eca4c12-77bb-4e32-9738-1d29f1d2174a"). InnerVolumeSpecName "kube-api-access-z8ls9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.899014 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aee9dccb-4475-404d-b169-496cc3ae6a2b" (UID: "aee9dccb-4475-404d-b169-496cc3ae6a2b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.899084 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "aee9dccb-4475-404d-b169-496cc3ae6a2b" (UID: "aee9dccb-4475-404d-b169-496cc3ae6a2b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.921641 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3eca4c12-77bb-4e32-9738-1d29f1d2174a" (UID: "3eca4c12-77bb-4e32-9738-1d29f1d2174a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.928385 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3eca4c12-77bb-4e32-9738-1d29f1d2174a" (UID: "3eca4c12-77bb-4e32-9738-1d29f1d2174a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.935152 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3eca4c12-77bb-4e32-9738-1d29f1d2174a" (UID: "3eca4c12-77bb-4e32-9738-1d29f1d2174a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.943092 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-config-data" (OuterVolumeSpecName: "config-data") pod "aee9dccb-4475-404d-b169-496cc3ae6a2b" (UID: "aee9dccb-4475-404d-b169-496cc3ae6a2b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.943427 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-config" (OuterVolumeSpecName: "config") pod "3eca4c12-77bb-4e32-9738-1d29f1d2174a" (UID: "3eca4c12-77bb-4e32-9738-1d29f1d2174a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.972319 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.972373 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.972386 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.972395 4708 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.972404 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aee9dccb-4475-404d-b169-496cc3ae6a2b-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.972412 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.972422 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6bf5w\" (UniqueName: \"kubernetes.io/projected/aee9dccb-4475-404d-b169-496cc3ae6a2b-kube-api-access-6bf5w\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.972431 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8ls9\" (UniqueName: \"kubernetes.io/projected/3eca4c12-77bb-4e32-9738-1d29f1d2174a-kube-api-access-z8ls9\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:38 crc kubenswrapper[4708]: I0227 17:16:38.972439 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3eca4c12-77bb-4e32-9738-1d29f1d2174a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:39 crc kubenswrapper[4708]: I0227 17:16:39.111995 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-698758b865-svwxj" podUID="3eca4c12-77bb-4e32-9738-1d29f1d2174a" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.135:5353: i/o timeout" Feb 27 17:16:39 crc kubenswrapper[4708]: I0227 17:16:39.419411 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-ql5zj" Feb 27 17:16:39 crc kubenswrapper[4708]: I0227 17:16:39.419411 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-ql5zj" event={"ID":"aee9dccb-4475-404d-b169-496cc3ae6a2b","Type":"ContainerDied","Data":"cf01239e0c7c8d6603ee86dbc358d3665517f288058cedef7a9a5b69223ce8fe"} Feb 27 17:16:39 crc kubenswrapper[4708]: I0227 17:16:39.419555 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cf01239e0c7c8d6603ee86dbc358d3665517f288058cedef7a9a5b69223ce8fe" Feb 27 17:16:39 crc kubenswrapper[4708]: I0227 17:16:39.421267 4708 generic.go:334] "Generic (PLEG): container finished" podID="c3f22956-f17c-4339-b166-a3c29355b5d2" containerID="a7e5563519d075cc03eeb2eafcb7bf8dd8bb05152315ecb7a3b88557da4e5208" exitCode=0 Feb 27 17:16:39 crc kubenswrapper[4708]: I0227 17:16:39.421308 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jbvsr" event={"ID":"c3f22956-f17c-4339-b166-a3c29355b5d2","Type":"ContainerDied","Data":"a7e5563519d075cc03eeb2eafcb7bf8dd8bb05152315ecb7a3b88557da4e5208"} Feb 27 17:16:39 crc kubenswrapper[4708]: I0227 17:16:39.423768 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-698758b865-svwxj" event={"ID":"3eca4c12-77bb-4e32-9738-1d29f1d2174a","Type":"ContainerDied","Data":"858bf0da8c7ad137cfa66d424ee1113c0e2c8239546ea54c4cb0da4d71673a60"} Feb 27 17:16:39 crc kubenswrapper[4708]: I0227 17:16:39.423816 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-698758b865-svwxj" Feb 27 17:16:39 crc kubenswrapper[4708]: E0227 17:16:39.431079 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-s4ckm" podUID="57f4cfb1-705b-40bb-b7aa-d722d1ec00c5" Feb 27 17:16:39 crc kubenswrapper[4708]: I0227 17:16:39.471264 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-698758b865-svwxj"] Feb 27 17:16:39 crc kubenswrapper[4708]: I0227 17:16:39.477527 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-698758b865-svwxj"] Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.241751 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3eca4c12-77bb-4e32-9738-1d29f1d2174a" path="/var/lib/kubelet/pods/3eca4c12-77bb-4e32-9738-1d29f1d2174a/volumes" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.249322 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-6jhl5"] Feb 27 17:16:40 crc kubenswrapper[4708]: E0227 17:16:40.249736 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eca4c12-77bb-4e32-9738-1d29f1d2174a" containerName="init" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.249797 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eca4c12-77bb-4e32-9738-1d29f1d2174a" containerName="init" Feb 27 17:16:40 crc kubenswrapper[4708]: E0227 17:16:40.249882 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3eca4c12-77bb-4e32-9738-1d29f1d2174a" containerName="dnsmasq-dns" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.249934 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="3eca4c12-77bb-4e32-9738-1d29f1d2174a" containerName="dnsmasq-dns" Feb 27 17:16:40 
crc kubenswrapper[4708]: E0227 17:16:40.250014 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aee9dccb-4475-404d-b169-496cc3ae6a2b" containerName="glance-db-sync" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.250071 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="aee9dccb-4475-404d-b169-496cc3ae6a2b" containerName="glance-db-sync" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.250276 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="aee9dccb-4475-404d-b169-496cc3ae6a2b" containerName="glance-db-sync" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.250340 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="3eca4c12-77bb-4e32-9738-1d29f1d2174a" containerName="dnsmasq-dns" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.254317 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-6jhl5"] Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.254467 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.411755 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.411838 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.411905 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6djq\" (UniqueName: \"kubernetes.io/projected/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-kube-api-access-t6djq\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.411931 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-dns-svc\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.411948 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-config\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.513278 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc 
kubenswrapper[4708]: I0227 17:16:40.513594 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.513633 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6djq\" (UniqueName: \"kubernetes.io/projected/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-kube-api-access-t6djq\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.513658 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-dns-svc\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.513676 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-config\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.514455 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-ovsdbserver-sb\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.514562 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-dns-svc\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.514674 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-config\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.514803 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-ovsdbserver-nb\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.551823 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6djq\" (UniqueName: \"kubernetes.io/projected/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-kube-api-access-t6djq\") pod \"dnsmasq-dns-f84976bdf-6jhl5\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:40 crc kubenswrapper[4708]: I0227 17:16:40.596881 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.141593 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.146650 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.150628 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.150900 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-8v89l" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.152790 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.161770 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.330654 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93933999-13bd-459a-a885-07cc77031a9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.330747 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93933999-13bd-459a-a885-07cc77031a9f-logs\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.330910 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.331026 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.331096 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.331120 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 
17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.331265 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8f92l\" (UniqueName: \"kubernetes.io/projected/93933999-13bd-459a-a885-07cc77031a9f-kube-api-access-8f92l\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.434187 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93933999-13bd-459a-a885-07cc77031a9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.434248 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93933999-13bd-459a-a885-07cc77031a9f-logs\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.434296 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.434347 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.434387 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.434406 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.434434 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8f92l\" (UniqueName: \"kubernetes.io/projected/93933999-13bd-459a-a885-07cc77031a9f-kube-api-access-8f92l\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.434678 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93933999-13bd-459a-a885-07cc77031a9f-logs\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc 
kubenswrapper[4708]: I0227 17:16:41.437336 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93933999-13bd-459a-a885-07cc77031a9f-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.439751 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.439795 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/852cd9e461d89b39e32be31d4cb707ef1d2abb65b96de01c0d2dcb097d159f7c/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.440144 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.440700 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-scripts\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.446321 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-config-data\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.455408 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8f92l\" (UniqueName: \"kubernetes.io/projected/93933999-13bd-459a-a885-07cc77031a9f-kube-api-access-8f92l\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.474780 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") pod \"glance-default-external-api-0\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.524917 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.528874 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.531389 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.537923 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.638336 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/664c1ac7-370e-452a-b7a9-ac087d06cfc9-logs\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.638389 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.638411 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqq4w\" (UniqueName: \"kubernetes.io/projected/664c1ac7-370e-452a-b7a9-ac087d06cfc9-kube-api-access-dqq4w\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.638462 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.638530 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/664c1ac7-370e-452a-b7a9-ac087d06cfc9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.638573 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.638600 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.741991 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-combined-ca-bundle\") 
pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.742091 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/664c1ac7-370e-452a-b7a9-ac087d06cfc9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.742145 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.742171 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.742199 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/664c1ac7-370e-452a-b7a9-ac087d06cfc9-logs\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.742225 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.742242 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqq4w\" (UniqueName: \"kubernetes.io/projected/664c1ac7-370e-452a-b7a9-ac087d06cfc9-kube-api-access-dqq4w\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.746071 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/664c1ac7-370e-452a-b7a9-ac087d06cfc9-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.746541 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/664c1ac7-370e-452a-b7a9-ac087d06cfc9-logs\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.752095 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.752123 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6cf9b68842a44daba5610601208ef38850856ede1b5f40d133ba6995034e3af2/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.761572 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.762216 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-scripts\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.762326 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.762574 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-config-data\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.769482 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqq4w\" (UniqueName: \"kubernetes.io/projected/664c1ac7-370e-452a-b7a9-ac087d06cfc9-kube-api-access-dqq4w\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.794778 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"glance-default-internal-api-0\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.886209 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:16:41 crc kubenswrapper[4708]: I0227 17:16:41.913745 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-jbvsr" Feb 27 17:16:42 crc kubenswrapper[4708]: I0227 17:16:42.046995 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f22956-f17c-4339-b166-a3c29355b5d2-combined-ca-bundle\") pod \"c3f22956-f17c-4339-b166-a3c29355b5d2\" (UID: \"c3f22956-f17c-4339-b166-a3c29355b5d2\") " Feb 27 17:16:42 crc kubenswrapper[4708]: I0227 17:16:42.047097 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rgjvl\" (UniqueName: \"kubernetes.io/projected/c3f22956-f17c-4339-b166-a3c29355b5d2-kube-api-access-rgjvl\") pod \"c3f22956-f17c-4339-b166-a3c29355b5d2\" (UID: \"c3f22956-f17c-4339-b166-a3c29355b5d2\") " Feb 27 17:16:42 crc kubenswrapper[4708]: I0227 17:16:42.047143 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c3f22956-f17c-4339-b166-a3c29355b5d2-config\") pod \"c3f22956-f17c-4339-b166-a3c29355b5d2\" (UID: \"c3f22956-f17c-4339-b166-a3c29355b5d2\") " Feb 27 17:16:42 crc kubenswrapper[4708]: I0227 17:16:42.051417 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3f22956-f17c-4339-b166-a3c29355b5d2-kube-api-access-rgjvl" (OuterVolumeSpecName: "kube-api-access-rgjvl") pod "c3f22956-f17c-4339-b166-a3c29355b5d2" (UID: "c3f22956-f17c-4339-b166-a3c29355b5d2"). InnerVolumeSpecName "kube-api-access-rgjvl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:16:42 crc kubenswrapper[4708]: I0227 17:16:42.084040 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f22956-f17c-4339-b166-a3c29355b5d2-config" (OuterVolumeSpecName: "config") pod "c3f22956-f17c-4339-b166-a3c29355b5d2" (UID: "c3f22956-f17c-4339-b166-a3c29355b5d2"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:42 crc kubenswrapper[4708]: I0227 17:16:42.089948 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3f22956-f17c-4339-b166-a3c29355b5d2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c3f22956-f17c-4339-b166-a3c29355b5d2" (UID: "c3f22956-f17c-4339-b166-a3c29355b5d2"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:42 crc kubenswrapper[4708]: I0227 17:16:42.149119 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c3f22956-f17c-4339-b166-a3c29355b5d2-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:42 crc kubenswrapper[4708]: I0227 17:16:42.149154 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rgjvl\" (UniqueName: \"kubernetes.io/projected/c3f22956-f17c-4339-b166-a3c29355b5d2-kube-api-access-rgjvl\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:42 crc kubenswrapper[4708]: I0227 17:16:42.149165 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/c3f22956-f17c-4339-b166-a3c29355b5d2-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:42 crc kubenswrapper[4708]: I0227 17:16:42.461362 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-jbvsr" event={"ID":"c3f22956-f17c-4339-b166-a3c29355b5d2","Type":"ContainerDied","Data":"8766969c1f49e469e63c843a81cfd147ee700bde1f34b4d64fb37c3682253fd8"} Feb 27 17:16:42 crc kubenswrapper[4708]: I0227 17:16:42.461448 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8766969c1f49e469e63c843a81cfd147ee700bde1f34b4d64fb37c3682253fd8" Feb 27 17:16:42 crc kubenswrapper[4708]: I0227 17:16:42.461537 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-jbvsr" Feb 27 17:16:42 crc kubenswrapper[4708]: I0227 17:16:42.970594 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.094229 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.141777 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-6jhl5"] Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.172269 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fb745b69-c24g9"] Feb 27 17:16:43 crc kubenswrapper[4708]: E0227 17:16:43.172658 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3f22956-f17c-4339-b166-a3c29355b5d2" containerName="neutron-db-sync" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.172671 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3f22956-f17c-4339-b166-a3c29355b5d2" containerName="neutron-db-sync" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.172861 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3f22956-f17c-4339-b166-a3c29355b5d2" containerName="neutron-db-sync" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.173872 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.195393 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-c24g9"] Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.270012 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-ovsdbserver-sb\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.270064 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d44s7\" (UniqueName: \"kubernetes.io/projected/5e596e79-d862-49bc-b016-afaaab6828f8-kube-api-access-d44s7\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.270127 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-ovsdbserver-nb\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.270182 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-dns-svc\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.270209 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-config\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.281163 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-d59d57f6-95wt9"] Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.282967 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.290543 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-4f5nw" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.294974 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.295137 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.300381 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.304623 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d59d57f6-95wt9"] Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.371916 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-config\") pod \"neutron-d59d57f6-95wt9\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.371965 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-combined-ca-bundle\") pod \"neutron-d59d57f6-95wt9\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.372006 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-dns-svc\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.372035 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-config\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.372066 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-httpd-config\") pod \"neutron-d59d57f6-95wt9\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.372091 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvhv5\" (UniqueName: \"kubernetes.io/projected/0b006312-c735-4397-96d7-0f742b67af82-kube-api-access-cvhv5\") pod \"neutron-d59d57f6-95wt9\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.372111 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-ovndb-tls-certs\") pod \"neutron-d59d57f6-95wt9\" (UID: 
\"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.372160 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-ovsdbserver-sb\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.372180 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d44s7\" (UniqueName: \"kubernetes.io/projected/5e596e79-d862-49bc-b016-afaaab6828f8-kube-api-access-d44s7\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.372244 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-ovsdbserver-nb\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.373127 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-ovsdbserver-nb\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.373643 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-ovsdbserver-sb\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.374632 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-dns-svc\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.375100 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-config\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.391184 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d44s7\" (UniqueName: \"kubernetes.io/projected/5e596e79-d862-49bc-b016-afaaab6828f8-kube-api-access-d44s7\") pod \"dnsmasq-dns-fb745b69-c24g9\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.474306 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-httpd-config\") pod \"neutron-d59d57f6-95wt9\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.474354 
4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvhv5\" (UniqueName: \"kubernetes.io/projected/0b006312-c735-4397-96d7-0f742b67af82-kube-api-access-cvhv5\") pod \"neutron-d59d57f6-95wt9\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.474385 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-ovndb-tls-certs\") pod \"neutron-d59d57f6-95wt9\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.474498 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-config\") pod \"neutron-d59d57f6-95wt9\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.474523 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-combined-ca-bundle\") pod \"neutron-d59d57f6-95wt9\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.483468 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-httpd-config\") pod \"neutron-d59d57f6-95wt9\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.485600 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-combined-ca-bundle\") pod \"neutron-d59d57f6-95wt9\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.492469 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-ovndb-tls-certs\") pod \"neutron-d59d57f6-95wt9\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.493247 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-config\") pod \"neutron-d59d57f6-95wt9\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.501832 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvhv5\" (UniqueName: \"kubernetes.io/projected/0b006312-c735-4397-96d7-0f742b67af82-kube-api-access-cvhv5\") pod \"neutron-d59d57f6-95wt9\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.508036 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:43 crc kubenswrapper[4708]: I0227 17:16:43.607640 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.130425 4708 scope.go:117] "RemoveContainer" containerID="4ef9315f37c4e43eb2653e30684c7f05d89304dfd469839b4de9cc866ad329d4" Feb 27 17:16:46 crc kubenswrapper[4708]: E0227 17:16:46.157038 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 27 17:16:46 crc kubenswrapper[4708]: E0227 17:16:46.157087 4708 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current" Feb 27 17:16:46 crc kubenswrapper[4708]: E0227 17:16:46.157242 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cloudkitty-db-sync,Image:quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CloudKittyPassword,Value:,ValueFrom:&EnvVarSource{FieldRef:nil,ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:&SecretKeySelector{LocalObjectReference:LocalObjectReference{Name:osp-secret,},Key:CloudKittyPassword,Optional:nil,},},},EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:scripts,ReadOnly:true,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:cloudkitty-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:certs,ReadOnly:true,MountPath:/var/lib/openstack/loki-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hnhfv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42406,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cloudkitty-db-sync-lhfzc_openstack(76e1fee2-5549-44d4-aaab-c70ad0fb083e): 
ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 17:16:46 crc kubenswrapper[4708]: E0227 17:16:46.158514 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cloudkitty-db-sync-lhfzc" podUID="76e1fee2-5549-44d4-aaab-c70ad0fb083e" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.313419 4708 scope.go:117] "RemoveContainer" containerID="c59b96556c4590204aaef72112417d7abd8bd28ea7832b1b131569c535cf744f" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.354765 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-556cb97757-rbj2s"] Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.356462 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.358421 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.361248 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.374362 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-556cb97757-rbj2s"] Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.430619 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-config\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.430919 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-public-tls-certs\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.430938 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvc6d\" (UniqueName: \"kubernetes.io/projected/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-kube-api-access-jvc6d\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.430961 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-internal-tls-certs\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.430986 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-combined-ca-bundle\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.431023 4708 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-ovndb-tls-certs\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.431113 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-httpd-config\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: E0227 17:16:46.503336 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cloudkitty-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.rdoproject.org/podified-master-centos10/openstack-cloudkitty-api:current\\\"\"" pod="openstack/cloudkitty-db-sync-lhfzc" podUID="76e1fee2-5549-44d4-aaab-c70ad0fb083e" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.532895 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-combined-ca-bundle\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.532986 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-ovndb-tls-certs\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.533125 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-httpd-config\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.533157 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-config\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.533224 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-public-tls-certs\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.533271 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvc6d\" (UniqueName: \"kubernetes.io/projected/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-kube-api-access-jvc6d\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.533298 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
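
The two E-level bursts above for cloudkitty-db-sync-lhfzc are one image-pull retry cycle: the CRI pull of openstack-cloudkitty-api:current was aborted mid-transfer ("rpc error: code = Canceled desc = copying config: context canceled", i.e. the request's context was cancelled before the image copy finished), the pod sync failed with ErrImagePull, and by the next sync at 17:16:46.503336 the pod is already throttled to ImagePullBackOff, so the kubelet sits out a back-off window instead of re-pulling at once. A minimal sketch of that delay series, assuming the kubelet's stock image back-off of 10 s doubling to a 5 min cap (an implementation default, not something this journal states; image_pull_backoff is a hypothetical helper):

    def image_pull_backoff(initial: float = 10.0, cap: float = 300.0):
        """Yield successive image-pull back-off delays, in seconds (assumed defaults)."""
        delay = initial
        while True:
            yield min(delay, cap)
            delay *= 2

    delays = image_pull_backoff()
    print([next(delays) for _ in range(6)])  # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]

Each further failure doubles the wait, which is why a pod can stay in ImagePullBackOff for minutes after the underlying registry problem clears.
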
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-internal-tls-certs\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.539818 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-internal-tls-certs\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.540130 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-combined-ca-bundle\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.544465 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-config\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.544824 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-ovndb-tls-certs\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.551406 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvc6d\" (UniqueName: \"kubernetes.io/projected/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-kube-api-access-jvc6d\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.557767 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-httpd-config\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.563906 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-public-tls-certs\") pod \"neutron-556cb97757-rbj2s\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.652821 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-lzgr7"] Feb 27 17:16:46 crc kubenswrapper[4708]: I0227 17:16:46.703368 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:46 crc kubenswrapper[4708]: W0227 17:16:46.793300 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod108a278a_0da6_4e63_be97_cea8279e7c99.slice/crio-04fd7c7c397a6489e24666ff67ff0b893fb3963172b9a40b1517839429ccedc4 WatchSource:0}: Error finding container 04fd7c7c397a6489e24666ff67ff0b893fb3963172b9a40b1517839429ccedc4: Status 404 returned error can't find the container with id 04fd7c7c397a6489e24666ff67ff0b893fb3963172b9a40b1517839429ccedc4 Feb 27 17:16:47 crc kubenswrapper[4708]: I0227 17:16:47.097862 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:16:47 crc kubenswrapper[4708]: W0227 17:16:47.261996 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod93933999_13bd_459a_a885_07cc77031a9f.slice/crio-37de49fffc48eed519cfce22635dc674c3b79ccc587ae9e708c5d5cd98431e70 WatchSource:0}: Error finding container 37de49fffc48eed519cfce22635dc674c3b79ccc587ae9e708c5d5cd98431e70: Status 404 returned error can't find the container with id 37de49fffc48eed519cfce22635dc674c3b79ccc587ae9e708c5d5cd98431e70 Feb 27 17:16:47 crc kubenswrapper[4708]: I0227 17:16:47.375905 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-6jhl5"] Feb 27 17:16:47 crc kubenswrapper[4708]: I0227 17:16:47.535217 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" event={"ID":"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2","Type":"ContainerStarted","Data":"bc60b6d1945de0bfd0e862c744e4665847c254efb1c4153708a9eb6e70d69309"} Feb 27 17:16:47 crc kubenswrapper[4708]: I0227 17:16:47.546208 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e"} Feb 27 17:16:47 crc kubenswrapper[4708]: I0227 17:16:47.554996 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lzgr7" event={"ID":"108a278a-0da6-4e63-be97-cea8279e7c99","Type":"ContainerStarted","Data":"04fd7c7c397a6489e24666ff67ff0b893fb3963172b9a40b1517839429ccedc4"} Feb 27 17:16:47 crc kubenswrapper[4708]: I0227 17:16:47.606635 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"93933999-13bd-459a-a885-07cc77031a9f","Type":"ContainerStarted","Data":"37de49fffc48eed519cfce22635dc674c3b79ccc587ae9e708c5d5cd98431e70"} Feb 27 17:16:47 crc kubenswrapper[4708]: I0227 17:16:47.696263 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-c24g9"] Feb 27 17:16:47 crc kubenswrapper[4708]: I0227 17:16:47.782825 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-d59d57f6-95wt9"] Feb 27 17:16:47 crc kubenswrapper[4708]: W0227 17:16:47.794302 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b006312_c735_4397_96d7_0f742b67af82.slice/crio-5df52fac3c06a8ed094eb5a143e6d967e7fbf60188c8d864cd540cf067521d85 WatchSource:0}: Error finding container 5df52fac3c06a8ed094eb5a143e6d967e7fbf60188c8d864cd540cf067521d85: Status 404 returned error can't find the container with id 
Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.039380 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-556cb97757-rbj2s"] Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.359437 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.623030 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-556cb97757-rbj2s" event={"ID":"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5","Type":"ContainerStarted","Data":"dc90c89f14a9541d16a137f0592f2d66bf4982a4aae577943ba5c27d252731ad"} Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.625572 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jgfws" event={"ID":"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a","Type":"ContainerStarted","Data":"cd7d77a1074bf8e22de44a3980b43c0f070d4ec56a36e904dfa86ad25063becc"} Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.633905 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"664c1ac7-370e-452a-b7a9-ac087d06cfc9","Type":"ContainerStarted","Data":"8fb098da6beacbc3f24ce1551dc9f997a372851b1a01db932e5b9da5cf8af853"} Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.635335 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d047b4cb-8a38-4b0b-b667-0b78aeb2a166","Type":"ContainerStarted","Data":"128b26bd67d7107d59156662763792b8fd1281bd074fbdebafa8650b6a50ce0f"} Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.636905 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"93933999-13bd-459a-a885-07cc77031a9f","Type":"ContainerStarted","Data":"d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783"} Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.640155 4708 generic.go:334] "Generic (PLEG): container finished" podID="5e596e79-d862-49bc-b016-afaaab6828f8" containerID="1bc4eec3450571587e9296a9da6256270cec6c873fe2591f4e8e85a8da8e8bed" exitCode=0 Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.640203 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-c24g9" event={"ID":"5e596e79-d862-49bc-b016-afaaab6828f8","Type":"ContainerDied","Data":"1bc4eec3450571587e9296a9da6256270cec6c873fe2591f4e8e85a8da8e8bed"} Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.640218 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-c24g9" event={"ID":"5e596e79-d862-49bc-b016-afaaab6828f8","Type":"ContainerStarted","Data":"ece491d5a7461d06b4c08ea6aa9a369e229cd6f07e5d2fa6a09b72e56fda59bf"} Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.657435 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"e6ae868232176bca0302efbb247dc043bdcb08064d471f4d51ca9cbdc5fea53b"} Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.657484 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"463ecaa7b384f293ae53366b150b9fb7ab2954fc985ad82bd02fc4fda2886b8b"} Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.661950 4708 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openstack/placement-db-sync-jgfws" podStartSLOduration=13.335745906 podStartE2EDuration="41.66192729s" podCreationTimestamp="2026-02-27 17:16:07 +0000 UTC" firstStartedPulling="2026-02-27 17:16:08.731683192 +0000 UTC m=+1367.247480779" lastFinishedPulling="2026-02-27 17:16:37.057864566 +0000 UTC m=+1395.573662163" observedRunningTime="2026-02-27 17:16:48.645647638 +0000 UTC m=+1407.161445225" watchObservedRunningTime="2026-02-27 17:16:48.66192729 +0000 UTC m=+1407.177724877" Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.666819 4708 generic.go:334] "Generic (PLEG): container finished" podID="ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2" containerID="5182c17c490fc6dc9c93399455f07911595254dac7483fdc32264be2de6a42b2" exitCode=0 Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.666905 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" event={"ID":"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2","Type":"ContainerDied","Data":"5182c17c490fc6dc9c93399455f07911595254dac7483fdc32264be2de6a42b2"} Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.691872 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lzgr7" event={"ID":"108a278a-0da6-4e63-be97-cea8279e7c99","Type":"ContainerStarted","Data":"5942bf776f7d55da9c41b01d332b9a021c1833367facdd5dd4e3040e1cc4047d"} Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.721254 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d59d57f6-95wt9" event={"ID":"0b006312-c735-4397-96d7-0f742b67af82","Type":"ContainerStarted","Data":"82b628f51d5c712d7c99021fcb12ac29169fe79378d581b1d4fc244839d3b797"} Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.721308 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d59d57f6-95wt9" event={"ID":"0b006312-c735-4397-96d7-0f742b67af82","Type":"ContainerStarted","Data":"5df52fac3c06a8ed094eb5a143e6d967e7fbf60188c8d864cd540cf067521d85"} Feb 27 17:16:48 crc kubenswrapper[4708]: I0227 17:16:48.755794 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-lzgr7" podStartSLOduration=15.755771382 podStartE2EDuration="15.755771382s" podCreationTimestamp="2026-02-27 17:16:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:16:48.729288461 +0000 UTC m=+1407.245086048" watchObservedRunningTime="2026-02-27 17:16:48.755771382 +0000 UTC m=+1407.271568969" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.495304 4708 scope.go:117] "RemoveContainer" containerID="2e1d1d6696a81e89844f170efb76497881717839f08def2e50b3d046e8135816" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.642075 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.758664 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-ovsdbserver-sb\") pod \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.758730 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-ovsdbserver-nb\") pod \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.758872 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-config\") pod \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.758896 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-dns-svc\") pod \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.758939 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t6djq\" (UniqueName: \"kubernetes.io/projected/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-kube-api-access-t6djq\") pod \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\" (UID: \"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2\") " Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.775735 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"664c1ac7-370e-452a-b7a9-ac087d06cfc9","Type":"ContainerStarted","Data":"859ef949014699cf97baaa39179da9132955cd9306320ad1347a6d97e2cc4aaf"} Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.803212 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d59d57f6-95wt9" event={"ID":"0b006312-c735-4397-96d7-0f742b67af82","Type":"ContainerStarted","Data":"a2c31d3d0e0748b42c1e554b43420c60869f6ee7afbf8eff1040d8d11eaf06ac"} Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.803279 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.808495 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-kube-api-access-t6djq" (OuterVolumeSpecName: "kube-api-access-t6djq") pod "ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2" (UID: "ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2"). InnerVolumeSpecName "kube-api-access-t6djq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.825860 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2" (UID: "ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.852949 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-d59d57f6-95wt9" podStartSLOduration=6.852921567 podStartE2EDuration="6.852921567s" podCreationTimestamp="2026-02-27 17:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:16:49.840879855 +0000 UTC m=+1408.356677442" watchObservedRunningTime="2026-02-27 17:16:49.852921567 +0000 UTC m=+1408.368719174" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.858919 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"93933999-13bd-459a-a885-07cc77031a9f","Type":"ContainerStarted","Data":"0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d"} Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.859089 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="93933999-13bd-459a-a885-07cc77031a9f" containerName="glance-log" containerID="cri-o://d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783" gracePeriod=30 Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.859219 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="93933999-13bd-459a-a885-07cc77031a9f" containerName="glance-httpd" containerID="cri-o://0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d" gracePeriod=30 Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.861267 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2" (UID: "ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.861944 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t6djq\" (UniqueName: \"kubernetes.io/projected/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-kube-api-access-t6djq\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.861957 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.861972 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.870968 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"c9a49a941ae58577fb75f620a467ad92a37eb6e4a5683b20a1d19a209161611e"} Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.871009 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"2c394102883bb3d41cca1c096bc259fe35a952efc252d2931dc28abd8a0f1abc"} Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.919027 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.919002532 podStartE2EDuration="9.919002532s" podCreationTimestamp="2026-02-27 17:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:16:49.882549808 +0000 UTC m=+1408.398347395" watchObservedRunningTime="2026-02-27 17:16:49.919002532 +0000 UTC m=+1408.434800119" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.920454 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2" (UID: "ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.920956 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-config" (OuterVolumeSpecName: "config") pod "ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2" (UID: "ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.933908 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-c24g9" event={"ID":"5e596e79-d862-49bc-b016-afaaab6828f8","Type":"ContainerStarted","Data":"0f28d3f200e4a1c615d8818382e481e4969daacbb688eafb8bb06c1d1bd0cfae"} Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.933944 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.955508 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" event={"ID":"ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2","Type":"ContainerDied","Data":"bc60b6d1945de0bfd0e862c744e4665847c254efb1c4153708a9eb6e70d69309"} Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.955562 4708 scope.go:117] "RemoveContainer" containerID="5182c17c490fc6dc9c93399455f07911595254dac7483fdc32264be2de6a42b2" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.955716 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-f84976bdf-6jhl5" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.964631 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.964664 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.965153 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-fb745b69-c24g9" podStartSLOduration=6.965131821 podStartE2EDuration="6.965131821s" podCreationTimestamp="2026-02-27 17:16:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:16:49.953957363 +0000 UTC m=+1408.469754951" watchObservedRunningTime="2026-02-27 17:16:49.965131821 +0000 UTC m=+1408.480929418" Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.978292 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-556cb97757-rbj2s" event={"ID":"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5","Type":"ContainerStarted","Data":"d55a7a08666fab43e70b497c7e6ef9b5949f9e5559045907eb71973edbc42ae8"} Feb 27 17:16:49 crc kubenswrapper[4708]: I0227 17:16:49.979183 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.050157 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-556cb97757-rbj2s" podStartSLOduration=4.050136662 podStartE2EDuration="4.050136662s" podCreationTimestamp="2026-02-27 17:16:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:16:50.017425144 +0000 UTC m=+1408.533222731" watchObservedRunningTime="2026-02-27 17:16:50.050136662 +0000 UTC m=+1408.565934239" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.088072 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-6jhl5"] Feb 27 17:16:50 crc kubenswrapper[4708]: 
I0227 17:16:50.090363 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-f84976bdf-6jhl5"] Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.240932 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2" path="/var/lib/kubelet/pods/ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2/volumes" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.591340 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.782908 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8f92l\" (UniqueName: \"kubernetes.io/projected/93933999-13bd-459a-a885-07cc77031a9f-kube-api-access-8f92l\") pod \"93933999-13bd-459a-a885-07cc77031a9f\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.783293 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-combined-ca-bundle\") pod \"93933999-13bd-459a-a885-07cc77031a9f\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.783471 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-config-data\") pod \"93933999-13bd-459a-a885-07cc77031a9f\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.783599 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-scripts\") pod \"93933999-13bd-459a-a885-07cc77031a9f\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.783787 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") pod \"93933999-13bd-459a-a885-07cc77031a9f\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.783841 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93933999-13bd-459a-a885-07cc77031a9f-logs\") pod \"93933999-13bd-459a-a885-07cc77031a9f\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.783871 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93933999-13bd-459a-a885-07cc77031a9f-httpd-run\") pod \"93933999-13bd-459a-a885-07cc77031a9f\" (UID: \"93933999-13bd-459a-a885-07cc77031a9f\") " Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.784282 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93933999-13bd-459a-a885-07cc77031a9f-logs" (OuterVolumeSpecName: "logs") pod "93933999-13bd-459a-a885-07cc77031a9f" (UID: "93933999-13bd-459a-a885-07cc77031a9f"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.784473 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/93933999-13bd-459a-a885-07cc77031a9f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "93933999-13bd-459a-a885-07cc77031a9f" (UID: "93933999-13bd-459a-a885-07cc77031a9f"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.790978 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93933999-13bd-459a-a885-07cc77031a9f-kube-api-access-8f92l" (OuterVolumeSpecName: "kube-api-access-8f92l") pod "93933999-13bd-459a-a885-07cc77031a9f" (UID: "93933999-13bd-459a-a885-07cc77031a9f"). InnerVolumeSpecName "kube-api-access-8f92l". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.804962 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-scripts" (OuterVolumeSpecName: "scripts") pod "93933999-13bd-459a-a885-07cc77031a9f" (UID: "93933999-13bd-459a-a885-07cc77031a9f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.833273 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "93933999-13bd-459a-a885-07cc77031a9f" (UID: "93933999-13bd-459a-a885-07cc77031a9f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.848664 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e" (OuterVolumeSpecName: "glance") pod "93933999-13bd-459a-a885-07cc77031a9f" (UID: "93933999-13bd-459a-a885-07cc77031a9f"). InnerVolumeSpecName "pvc-391ff05f-bf42-4781-89df-7a3aa774575e". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.854764 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-config-data" (OuterVolumeSpecName: "config-data") pod "93933999-13bd-459a-a885-07cc77031a9f" (UID: "93933999-13bd-459a-a885-07cc77031a9f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.885604 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/93933999-13bd-459a-a885-07cc77031a9f-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.885629 4708 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/93933999-13bd-459a-a885-07cc77031a9f-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.885640 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8f92l\" (UniqueName: \"kubernetes.io/projected/93933999-13bd-459a-a885-07cc77031a9f-kube-api-access-8f92l\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.885649 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.885658 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.885666 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/93933999-13bd-459a-a885-07cc77031a9f-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.885697 4708 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") on node \"crc\" " Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.910130 4708 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.910284 4708 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-391ff05f-bf42-4781-89df-7a3aa774575e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e") on node "crc" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.988965 4708 reconciler_common.go:293] "Volume detached for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.989060 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"664c1ac7-370e-452a-b7a9-ac087d06cfc9","Type":"ContainerStarted","Data":"fb5c5ef01c0e77bf5a71824c510c4b52218a8529e4f87997661080cc340ddaa9"} Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.989183 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="664c1ac7-370e-452a-b7a9-ac087d06cfc9" containerName="glance-log" containerID="cri-o://859ef949014699cf97baaa39179da9132955cd9306320ad1347a6d97e2cc4aaf" gracePeriod=30 Feb 27 17:16:50 crc kubenswrapper[4708]: I0227 17:16:50.989255 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="664c1ac7-370e-452a-b7a9-ac087d06cfc9" containerName="glance-httpd" containerID="cri-o://fb5c5ef01c0e77bf5a71824c510c4b52218a8529e4f87997661080cc340ddaa9" gracePeriod=30 Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.005076 4708 generic.go:334] "Generic (PLEG): container finished" podID="93933999-13bd-459a-a885-07cc77031a9f" containerID="0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d" exitCode=143 Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.005103 4708 generic.go:334] "Generic (PLEG): container finished" podID="93933999-13bd-459a-a885-07cc77031a9f" containerID="d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783" exitCode=143 Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.005118 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.005192 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"93933999-13bd-459a-a885-07cc77031a9f","Type":"ContainerDied","Data":"0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d"} Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.005219 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"93933999-13bd-459a-a885-07cc77031a9f","Type":"ContainerDied","Data":"d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783"} Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.005230 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"93933999-13bd-459a-a885-07cc77031a9f","Type":"ContainerDied","Data":"37de49fffc48eed519cfce22635dc674c3b79ccc587ae9e708c5d5cd98431e70"} Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.005252 4708 scope.go:117] "RemoveContainer" containerID="0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.012194 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=11.012179385 podStartE2EDuration="11.012179385s" podCreationTimestamp="2026-02-27 17:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:16:51.007648597 +0000 UTC m=+1409.523446184" watchObservedRunningTime="2026-02-27 17:16:51.012179385 +0000 UTC m=+1409.527976972" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.024533 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-556cb97757-rbj2s" event={"ID":"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5","Type":"ContainerStarted","Data":"ce682dc09e4d5f957a18d41066317899c0c870b411c993a2c48812f3a73ea7e1"} Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.043540 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.050769 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.064068 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:16:51 crc kubenswrapper[4708]: E0227 17:16:51.064436 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93933999-13bd-459a-a885-07cc77031a9f" containerName="glance-log" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.064447 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="93933999-13bd-459a-a885-07cc77031a9f" containerName="glance-log" Feb 27 17:16:51 crc kubenswrapper[4708]: E0227 17:16:51.064467 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93933999-13bd-459a-a885-07cc77031a9f" containerName="glance-httpd" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.064473 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="93933999-13bd-459a-a885-07cc77031a9f" containerName="glance-httpd" Feb 27 17:16:51 crc kubenswrapper[4708]: E0227 17:16:51.064487 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2" containerName="init" Feb 27 17:16:51 
crc kubenswrapper[4708]: I0227 17:16:51.064493 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2" containerName="init" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.064698 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="93933999-13bd-459a-a885-07cc77031a9f" containerName="glance-log" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.064732 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="ffb7a8c2-55ed-43f9-88c4-a7ebc76f57b2" containerName="init" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.064747 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="93933999-13bd-459a-a885-07cc77031a9f" containerName="glance-httpd" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.065708 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.067616 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.075486 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.075923 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.193841 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-logs\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.193922 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.193939 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.193975 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-config-data\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.199159 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc 
kubenswrapper[4708]: I0227 17:16:51.199282 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.199417 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-scripts\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.199462 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj79r\" (UniqueName: \"kubernetes.io/projected/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-kube-api-access-bj79r\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.301219 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-logs\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.301277 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.301304 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.301332 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-config-data\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.301417 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.301438 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " 
pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.301472 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-scripts\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.301492 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bj79r\" (UniqueName: \"kubernetes.io/projected/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-kube-api-access-bj79r\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.302121 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-logs\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.304259 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.306547 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.306575 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/852cd9e461d89b39e32be31d4cb707ef1d2abb65b96de01c0d2dcb097d159f7c/globalmount\"" pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.311055 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.311448 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-config-data\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.314483 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-scripts\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.321415 
4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.321965 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bj79r\" (UniqueName: \"kubernetes.io/projected/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-kube-api-access-bj79r\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.354908 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") pod \"glance-default-external-api-0\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.381310 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:16:51 crc kubenswrapper[4708]: I0227 17:16:51.509690 4708 scope.go:117] "RemoveContainer" containerID="d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.081033 4708 generic.go:334] "Generic (PLEG): container finished" podID="664c1ac7-370e-452a-b7a9-ac087d06cfc9" containerID="fb5c5ef01c0e77bf5a71824c510c4b52218a8529e4f87997661080cc340ddaa9" exitCode=0 Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.081065 4708 generic.go:334] "Generic (PLEG): container finished" podID="664c1ac7-370e-452a-b7a9-ac087d06cfc9" containerID="859ef949014699cf97baaa39179da9132955cd9306320ad1347a6d97e2cc4aaf" exitCode=143 Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.081115 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"664c1ac7-370e-452a-b7a9-ac087d06cfc9","Type":"ContainerDied","Data":"fb5c5ef01c0e77bf5a71824c510c4b52218a8529e4f87997661080cc340ddaa9"} Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.081141 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"664c1ac7-370e-452a-b7a9-ac087d06cfc9","Type":"ContainerDied","Data":"859ef949014699cf97baaa39179da9132955cd9306320ad1347a6d97e2cc4aaf"} Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.140982 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.233064 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-combined-ca-bundle\") pod \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.233231 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.233255 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-scripts\") pod \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.233296 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/664c1ac7-370e-452a-b7a9-ac087d06cfc9-logs\") pod \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.233316 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/664c1ac7-370e-452a-b7a9-ac087d06cfc9-httpd-run\") pod \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.233335 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqq4w\" (UniqueName: \"kubernetes.io/projected/664c1ac7-370e-452a-b7a9-ac087d06cfc9-kube-api-access-dqq4w\") pod \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.233360 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-config-data\") pod \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\" (UID: \"664c1ac7-370e-452a-b7a9-ac087d06cfc9\") " Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.236232 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/664c1ac7-370e-452a-b7a9-ac087d06cfc9-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "664c1ac7-370e-452a-b7a9-ac087d06cfc9" (UID: "664c1ac7-370e-452a-b7a9-ac087d06cfc9"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.236392 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/664c1ac7-370e-452a-b7a9-ac087d06cfc9-logs" (OuterVolumeSpecName: "logs") pod "664c1ac7-370e-452a-b7a9-ac087d06cfc9" (UID: "664c1ac7-370e-452a-b7a9-ac087d06cfc9"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.244920 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/664c1ac7-370e-452a-b7a9-ac087d06cfc9-kube-api-access-dqq4w" (OuterVolumeSpecName: "kube-api-access-dqq4w") pod "664c1ac7-370e-452a-b7a9-ac087d06cfc9" (UID: "664c1ac7-370e-452a-b7a9-ac087d06cfc9"). InnerVolumeSpecName "kube-api-access-dqq4w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.244988 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-scripts" (OuterVolumeSpecName: "scripts") pod "664c1ac7-370e-452a-b7a9-ac087d06cfc9" (UID: "664c1ac7-370e-452a-b7a9-ac087d06cfc9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.260374 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93933999-13bd-459a-a885-07cc77031a9f" path="/var/lib/kubelet/pods/93933999-13bd-459a-a885-07cc77031a9f/volumes" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.302978 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "664c1ac7-370e-452a-b7a9-ac087d06cfc9" (UID: "664c1ac7-370e-452a-b7a9-ac087d06cfc9"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.335382 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/664c1ac7-370e-452a-b7a9-ac087d06cfc9-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.335409 4708 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/664c1ac7-370e-452a-b7a9-ac087d06cfc9-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.335419 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dqq4w\" (UniqueName: \"kubernetes.io/projected/664c1ac7-370e-452a-b7a9-ac087d06cfc9-kube-api-access-dqq4w\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.335430 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.335438 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.403055 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59" (OuterVolumeSpecName: "glance") pod "664c1ac7-370e-452a-b7a9-ac087d06cfc9" (UID: "664c1ac7-370e-452a-b7a9-ac087d06cfc9"). InnerVolumeSpecName "pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.407496 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-config-data" (OuterVolumeSpecName: "config-data") pod "664c1ac7-370e-452a-b7a9-ac087d06cfc9" (UID: "664c1ac7-370e-452a-b7a9-ac087d06cfc9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.437948 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/664c1ac7-370e-452a-b7a9-ac087d06cfc9-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.437997 4708 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") on node \"crc\" " Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.466867 4708 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.467014 4708 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59") on node "crc" Feb 27 17:16:52 crc kubenswrapper[4708]: I0227 17:16:52.539289 4708 reconciler_common.go:293] "Volume detached for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") on node \"crc\" DevicePath \"\"" Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.098388 4708 generic.go:334] "Generic (PLEG): container finished" podID="108a278a-0da6-4e63-be97-cea8279e7c99" containerID="5942bf776f7d55da9c41b01d332b9a021c1833367facdd5dd4e3040e1cc4047d" exitCode=0 Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.098471 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lzgr7" event={"ID":"108a278a-0da6-4e63-be97-cea8279e7c99","Type":"ContainerDied","Data":"5942bf776f7d55da9c41b01d332b9a021c1833367facdd5dd4e3040e1cc4047d"} Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.101875 4708 generic.go:334] "Generic (PLEG): container finished" podID="9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a" containerID="cd7d77a1074bf8e22de44a3980b43c0f070d4ec56a36e904dfa86ad25063becc" exitCode=0 Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.101931 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jgfws" event={"ID":"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a","Type":"ContainerDied","Data":"cd7d77a1074bf8e22de44a3980b43c0f070d4ec56a36e904dfa86ad25063becc"} Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.104228 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"664c1ac7-370e-452a-b7a9-ac087d06cfc9","Type":"ContainerDied","Data":"8fb098da6beacbc3f24ce1551dc9f997a372851b1a01db932e5b9da5cf8af853"} Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.104276 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.174113 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.189440 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.189493 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:16:53 crc kubenswrapper[4708]: E0227 17:16:53.189874 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="664c1ac7-370e-452a-b7a9-ac087d06cfc9" containerName="glance-httpd" Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.189890 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="664c1ac7-370e-452a-b7a9-ac087d06cfc9" containerName="glance-httpd" Feb 27 17:16:53 crc kubenswrapper[4708]: E0227 17:16:53.189920 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="664c1ac7-370e-452a-b7a9-ac087d06cfc9" containerName="glance-log" Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.189927 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="664c1ac7-370e-452a-b7a9-ac087d06cfc9" containerName="glance-log" Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.190134 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="664c1ac7-370e-452a-b7a9-ac087d06cfc9" containerName="glance-log" Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.190156 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="664c1ac7-370e-452a-b7a9-ac087d06cfc9" containerName="glance-httpd" Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.191091 4708 util.go:30] "No sandbox for pod can be found. 
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.214288 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.214478 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.235507 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.357240 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.357406 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.358484 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6d082cd-70c3-4ee1-9675-294347882c7d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.358833 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tc4x\" (UniqueName: \"kubernetes.io/projected/d6d082cd-70c3-4ee1-9675-294347882c7d-kube-api-access-5tc4x\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.359156 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.359400 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.359511 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.359596 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6d082cd-70c3-4ee1-9675-294347882c7d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.461258 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.461328 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.461410 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6d082cd-70c3-4ee1-9675-294347882c7d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.461463 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5tc4x\" (UniqueName: \"kubernetes.io/projected/d6d082cd-70c3-4ee1-9675-294347882c7d-kube-api-access-5tc4x\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.461495 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.461515 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.461539 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.461562 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6d082cd-70c3-4ee1-9675-294347882c7d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.461838 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6d082cd-70c3-4ee1-9675-294347882c7d-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.463341 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6d082cd-70c3-4ee1-9675-294347882c7d-logs\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.465433 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.465458 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6cf9b68842a44daba5610601208ef38850856ede1b5f40d133ba6995034e3af2/globalmount\"" pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.466743 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-scripts\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.470507 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.470975 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-config-data\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.483175 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.485724 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5tc4x\" (UniqueName: \"kubernetes.io/projected/d6d082cd-70c3-4ee1-9675-294347882c7d-kube-api-access-5tc4x\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.515811 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0"
\"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"glance-default-internal-api-0\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:16:53 crc kubenswrapper[4708]: I0227 17:16:53.545651 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:16:54 crc kubenswrapper[4708]: I0227 17:16:54.245608 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="664c1ac7-370e-452a-b7a9-ac087d06cfc9" path="/var/lib/kubelet/pods/664c1ac7-370e-452a-b7a9-ac087d06cfc9/volumes" Feb 27 17:16:58 crc kubenswrapper[4708]: I0227 17:16:58.511042 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:16:58 crc kubenswrapper[4708]: I0227 17:16:58.605561 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-hzj5k"] Feb 27 17:16:58 crc kubenswrapper[4708]: I0227 17:16:58.605791 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" podUID="60cc75e4-619b-4cb8-a663-3214b22f2b43" containerName="dnsmasq-dns" containerID="cri-o://c4fc70eb344877614cd7b1bf19cc6a2fd4b8f3b0ee9dd7ddb68b6e1ce272e2ca" gracePeriod=10 Feb 27 17:16:59 crc kubenswrapper[4708]: I0227 17:16:59.169313 4708 generic.go:334] "Generic (PLEG): container finished" podID="60cc75e4-619b-4cb8-a663-3214b22f2b43" containerID="c4fc70eb344877614cd7b1bf19cc6a2fd4b8f3b0ee9dd7ddb68b6e1ce272e2ca" exitCode=0 Feb 27 17:16:59 crc kubenswrapper[4708]: I0227 17:16:59.169411 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" event={"ID":"60cc75e4-619b-4cb8-a663-3214b22f2b43","Type":"ContainerDied","Data":"c4fc70eb344877614cd7b1bf19cc6a2fd4b8f3b0ee9dd7ddb68b6e1ce272e2ca"} Feb 27 17:17:02 crc kubenswrapper[4708]: I0227 17:17:02.977931 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" podUID="60cc75e4-619b-4cb8-a663-3214b22f2b43" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.167:5353: connect: connection refused" Feb 27 17:17:03 crc kubenswrapper[4708]: I0227 17:17:03.700932 4708 scope.go:117] "RemoveContainer" containerID="0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d" Feb 27 17:17:03 crc kubenswrapper[4708]: E0227 17:17:03.701510 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d\": container with ID starting with 0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d not found: ID does not exist" containerID="0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d" Feb 27 17:17:03 crc kubenswrapper[4708]: I0227 17:17:03.701585 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d"} err="failed to get container status \"0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d\": rpc error: code = NotFound desc = could not find container \"0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d\": container with ID starting with 0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d not found: ID does not 
exist" Feb 27 17:17:03 crc kubenswrapper[4708]: I0227 17:17:03.701626 4708 scope.go:117] "RemoveContainer" containerID="d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783" Feb 27 17:17:03 crc kubenswrapper[4708]: E0227 17:17:03.702345 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783\": container with ID starting with d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783 not found: ID does not exist" containerID="d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783" Feb 27 17:17:03 crc kubenswrapper[4708]: I0227 17:17:03.702436 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783"} err="failed to get container status \"d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783\": rpc error: code = NotFound desc = could not find container \"d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783\": container with ID starting with d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783 not found: ID does not exist" Feb 27 17:17:03 crc kubenswrapper[4708]: I0227 17:17:03.702496 4708 scope.go:117] "RemoveContainer" containerID="0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d" Feb 27 17:17:03 crc kubenswrapper[4708]: I0227 17:17:03.703141 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d"} err="failed to get container status \"0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d\": rpc error: code = NotFound desc = could not find container \"0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d\": container with ID starting with 0ef2a3dbb47b4dd0b48dff74e67977ec7b405bf78387aaa09a4ca20228e4ac9d not found: ID does not exist" Feb 27 17:17:03 crc kubenswrapper[4708]: I0227 17:17:03.703190 4708 scope.go:117] "RemoveContainer" containerID="d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783" Feb 27 17:17:03 crc kubenswrapper[4708]: I0227 17:17:03.703713 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783"} err="failed to get container status \"d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783\": rpc error: code = NotFound desc = could not find container \"d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783\": container with ID starting with d10026bb480eb5426a95dd9a88a77420ea518bae495b6c393eb7311d286b9783 not found: ID does not exist" Feb 27 17:17:03 crc kubenswrapper[4708]: I0227 17:17:03.703772 4708 scope.go:117] "RemoveContainer" containerID="fb5c5ef01c0e77bf5a71824c510c4b52218a8529e4f87997661080cc340ddaa9" Feb 27 17:17:03 crc kubenswrapper[4708]: E0227 17:17:03.749869 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/sg-core:latest" Feb 27 17:17:03 crc kubenswrapper[4708]: E0227 17:17:03.750167 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:sg-core,Image:quay.io/openstack-k8s-operators/sg-core:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:sg-core-conf-yaml,ReadOnly:false,MountPath:/etc/sg-core.conf.yaml,SubPath:sg-core.conf.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppg6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ceilometer-0_openstack(d047b4cb-8a38-4b0b-b667-0b78aeb2a166): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 17:17:03 crc kubenswrapper[4708]: I0227 17:17:03.913720 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jgfws" Feb 27 17:17:03 crc kubenswrapper[4708]: I0227 17:17:03.919359 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lzgr7" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.003038 4708 scope.go:117] "RemoveContainer" containerID="859ef949014699cf97baaa39179da9132955cd9306320ad1347a6d97e2cc4aaf" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.079039 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-scripts\") pod \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.079386 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-npjt2\" (UniqueName: \"kubernetes.io/projected/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-kube-api-access-npjt2\") pod \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.079491 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-scripts\") pod \"108a278a-0da6-4e63-be97-cea8279e7c99\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.079522 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-config-data\") pod \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.079547 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-fernet-keys\") pod \"108a278a-0da6-4e63-be97-cea8279e7c99\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.079581 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-combined-ca-bundle\") pod \"108a278a-0da6-4e63-be97-cea8279e7c99\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.079626 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-combined-ca-bundle\") pod \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.079706 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-logs\") pod \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\" (UID: \"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a\") " Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.079739 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-credential-keys\") pod \"108a278a-0da6-4e63-be97-cea8279e7c99\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.079755 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-config-data\") pod \"108a278a-0da6-4e63-be97-cea8279e7c99\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.079795 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgb6c\" (UniqueName: \"kubernetes.io/projected/108a278a-0da6-4e63-be97-cea8279e7c99-kube-api-access-cgb6c\") pod \"108a278a-0da6-4e63-be97-cea8279e7c99\" (UID: \"108a278a-0da6-4e63-be97-cea8279e7c99\") " Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.082497 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-logs" (OuterVolumeSpecName: "logs") pod "9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a" (UID: "9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.095161 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "108a278a-0da6-4e63-be97-cea8279e7c99" (UID: "108a278a-0da6-4e63-be97-cea8279e7c99"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.095337 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-kube-api-access-npjt2" (OuterVolumeSpecName: "kube-api-access-npjt2") pod "9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a" (UID: "9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a"). InnerVolumeSpecName "kube-api-access-npjt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.095400 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/108a278a-0da6-4e63-be97-cea8279e7c99-kube-api-access-cgb6c" (OuterVolumeSpecName: "kube-api-access-cgb6c") pod "108a278a-0da6-4e63-be97-cea8279e7c99" (UID: "108a278a-0da6-4e63-be97-cea8279e7c99"). InnerVolumeSpecName "kube-api-access-cgb6c". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.095439 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-scripts" (OuterVolumeSpecName: "scripts") pod "108a278a-0da6-4e63-be97-cea8279e7c99" (UID: "108a278a-0da6-4e63-be97-cea8279e7c99"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.102035 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-scripts" (OuterVolumeSpecName: "scripts") pod "9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a" (UID: "9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.102054 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "108a278a-0da6-4e63-be97-cea8279e7c99" (UID: "108a278a-0da6-4e63-be97-cea8279e7c99"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.148370 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a" (UID: "9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.152028 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "108a278a-0da6-4e63-be97-cea8279e7c99" (UID: "108a278a-0da6-4e63-be97-cea8279e7c99"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.157070 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-config-data" (OuterVolumeSpecName: "config-data") pod "108a278a-0da6-4e63-be97-cea8279e7c99" (UID: "108a278a-0da6-4e63-be97-cea8279e7c99"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.173835 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-config-data" (OuterVolumeSpecName: "config-data") pod "9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a" (UID: "9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.181333 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.181371 4708 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-credential-keys\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.181382 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.181391 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgb6c\" (UniqueName: \"kubernetes.io/projected/108a278a-0da6-4e63-be97-cea8279e7c99-kube-api-access-cgb6c\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.181403 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.181412 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-npjt2\" (UniqueName: \"kubernetes.io/projected/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-kube-api-access-npjt2\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.181420 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.181428 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.181436 4708 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.181444 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/108a278a-0da6-4e63-be97-cea8279e7c99-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.181452 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.219147 4708 util.go:48] "No ready sandbox for pod can be found. 
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.283988 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-jgfws" event={"ID":"9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a","Type":"ContainerDied","Data":"8ee05cb3b59904fde3fc3ef8ac349ba6d7930567f6bb199221d5db37ce02a02b"}
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.284025 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8ee05cb3b59904fde3fc3ef8ac349ba6d7930567f6bb199221d5db37ce02a02b"
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.284096 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-jgfws"
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.302934 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k" event={"ID":"60cc75e4-619b-4cb8-a663-3214b22f2b43","Type":"ContainerDied","Data":"3a0c2850e6c4975cdf610cd79927840e6f46a5d4b9550bfbef11253d23af5d66"}
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.302996 4708 scope.go:117] "RemoveContainer" containerID="c4fc70eb344877614cd7b1bf19cc6a2fd4b8f3b0ee9dd7ddb68b6e1ce272e2ca"
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.303122 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-68dcc9cf6f-hzj5k"
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.316316 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-lzgr7" event={"ID":"108a278a-0da6-4e63-be97-cea8279e7c99","Type":"ContainerDied","Data":"04fd7c7c397a6489e24666ff67ff0b893fb3963172b9a40b1517839429ccedc4"}
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.316352 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04fd7c7c397a6489e24666ff67ff0b893fb3963172b9a40b1517839429ccedc4"
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.316406 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-lzgr7"
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.344025 4708 scope.go:117] "RemoveContainer" containerID="d540d71d9b8916e4c9b64513cde7622af353797043d549c4855fbe423659a731"
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.391493 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.392099 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-ovsdbserver-nb\") pod \"60cc75e4-619b-4cb8-a663-3214b22f2b43\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") "
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.392202 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-dns-svc\") pod \"60cc75e4-619b-4cb8-a663-3214b22f2b43\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") "
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.392257 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88j8k\" (UniqueName: \"kubernetes.io/projected/60cc75e4-619b-4cb8-a663-3214b22f2b43-kube-api-access-88j8k\") pod \"60cc75e4-619b-4cb8-a663-3214b22f2b43\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") "
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.392317 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-config\") pod \"60cc75e4-619b-4cb8-a663-3214b22f2b43\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") "
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.392432 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-ovsdbserver-sb\") pod \"60cc75e4-619b-4cb8-a663-3214b22f2b43\" (UID: \"60cc75e4-619b-4cb8-a663-3214b22f2b43\") "
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.405012 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60cc75e4-619b-4cb8-a663-3214b22f2b43-kube-api-access-88j8k" (OuterVolumeSpecName: "kube-api-access-88j8k") pod "60cc75e4-619b-4cb8-a663-3214b22f2b43" (UID: "60cc75e4-619b-4cb8-a663-3214b22f2b43"). InnerVolumeSpecName "kube-api-access-88j8k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.438335 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "60cc75e4-619b-4cb8-a663-3214b22f2b43" (UID: "60cc75e4-619b-4cb8-a663-3214b22f2b43"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.444239 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "60cc75e4-619b-4cb8-a663-3214b22f2b43" (UID: "60cc75e4-619b-4cb8-a663-3214b22f2b43"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.450014 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "60cc75e4-619b-4cb8-a663-3214b22f2b43" (UID: "60cc75e4-619b-4cb8-a663-3214b22f2b43"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.459478 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-config" (OuterVolumeSpecName: "config") pod "60cc75e4-619b-4cb8-a663-3214b22f2b43" (UID: "60cc75e4-619b-4cb8-a663-3214b22f2b43"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.497531 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.497559 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.497570 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88j8k\" (UniqueName: \"kubernetes.io/projected/60cc75e4-619b-4cb8-a663-3214b22f2b43-kube-api-access-88j8k\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.497579 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-config\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.497587 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/60cc75e4-619b-4cb8-a663-3214b22f2b43-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.666988 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-hzj5k"]
Feb 27 17:17:04 crc kubenswrapper[4708]: I0227 17:17:04.682451 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-68dcc9cf6f-hzj5k"]
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.014097 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-6d5895b968-p7cts"]
Feb 27 17:17:05 crc kubenswrapper[4708]: E0227 17:17:05.014788 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a" containerName="placement-db-sync"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.014800 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a" containerName="placement-db-sync"
Feb 27 17:17:05 crc kubenswrapper[4708]: E0227 17:17:05.014817 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="108a278a-0da6-4e63-be97-cea8279e7c99" containerName="keystone-bootstrap"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.014823 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="108a278a-0da6-4e63-be97-cea8279e7c99" containerName="keystone-bootstrap"
Feb 27 17:17:05 crc kubenswrapper[4708]: E0227 17:17:05.014857 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60cc75e4-619b-4cb8-a663-3214b22f2b43" containerName="init"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.014864 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="60cc75e4-619b-4cb8-a663-3214b22f2b43" containerName="init"
Feb 27 17:17:05 crc kubenswrapper[4708]: E0227 17:17:05.014887 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60cc75e4-619b-4cb8-a663-3214b22f2b43" containerName="dnsmasq-dns"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.014893 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="60cc75e4-619b-4cb8-a663-3214b22f2b43" containerName="dnsmasq-dns"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.015067 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="60cc75e4-619b-4cb8-a663-3214b22f2b43" containerName="dnsmasq-dns"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.015079 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a" containerName="placement-db-sync"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.015092 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="108a278a-0da6-4e63-be97-cea8279e7c99" containerName="keystone-bootstrap"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.016160 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d5895b968-p7cts"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.018568 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-rzhr2"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.018720 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.018918 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.018948 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.019030 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.036556 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d5895b968-p7cts"]
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.108567 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-597b655d8b-dmxbr"]
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.114036 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-597b655d8b-dmxbr"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.116141 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.116289 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.118092 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-shlxn"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.118213 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-597b655d8b-dmxbr"]
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.118236 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.119175 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.119737 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.208578 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-scripts\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.209036 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-combined-ca-bundle\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.209096 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-internal-tls-certs\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.209178 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-public-tls-certs\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.209214 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8ntt\" (UniqueName: \"kubernetes.io/projected/5177dfe3-b55f-4a39-9a6b-392796ed3084-kube-api-access-r8ntt\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.209270 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-config-data\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.209306 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5177dfe3-b55f-4a39-9a6b-392796ed3084-logs\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311062 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-public-tls-certs\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311121 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8ntt\" (UniqueName: \"kubernetes.io/projected/5177dfe3-b55f-4a39-9a6b-392796ed3084-kube-api-access-r8ntt\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311143 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-config-data\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311181 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-config-data\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311211 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5177dfe3-b55f-4a39-9a6b-392796ed3084-logs\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311229 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-internal-tls-certs\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311269 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-fernet-keys\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr"
Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311287 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-public-tls-certs\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\")
" pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311319 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-combined-ca-bundle\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311334 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-credential-keys\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311359 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-scripts\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311377 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-combined-ca-bundle\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311416 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-internal-tls-certs\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311438 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-scripts\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.311477 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw4nv\" (UniqueName: \"kubernetes.io/projected/8d31c043-7a1b-4030-aa89-ccf8a23a766b-kube-api-access-dw4nv\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.312003 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5177dfe3-b55f-4a39-9a6b-392796ed3084-logs\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.316002 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-combined-ca-bundle\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts" Feb 27 
17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.317766 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-scripts\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.320233 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-internal-tls-certs\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.322575 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-public-tls-certs\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.330444 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-config-data\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.335300 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8ntt\" (UniqueName: \"kubernetes.io/projected/5177dfe3-b55f-4a39-9a6b-392796ed3084-kube-api-access-r8ntt\") pod \"placement-6d5895b968-p7cts\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.335914 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.346031 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-lhfzc" event={"ID":"76e1fee2-5549-44d4-aaab-c70ad0fb083e","Type":"ContainerStarted","Data":"88fec4da7b80600e36ed3573e1898c2c90c1850824d336ef91df8763e551a0db"} Feb 27 17:17:05 crc kubenswrapper[4708]: W0227 17:17:05.352083 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod695dc1d6_e0a6_4f40_b7aa_af1c5f49f134.slice/crio-3a318d4a0fb2276f84e28162129a3d0e7b994590a79831e98b0237edc4b523bc WatchSource:0}: Error finding container 3a318d4a0fb2276f84e28162129a3d0e7b994590a79831e98b0237edc4b523bc: Status 404 returned error can't find the container with id 3a318d4a0fb2276f84e28162129a3d0e7b994590a79831e98b0237edc4b523bc Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.359363 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6d082cd-70c3-4ee1-9675-294347882c7d","Type":"ContainerStarted","Data":"c0ed96637e848a67aa41fb01c560f7c5d9659c8953c083017454fd907b1a3a07"} Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.359394 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.359406 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6d082cd-70c3-4ee1-9675-294347882c7d","Type":"ContainerStarted","Data":"1ed0815f99cea9e28e9772e35c6f330ccc07f5e4d6be2c574f9ecff309e0b66d"} Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.368261 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-lhfzc" podStartSLOduration=3.2820393279999998 podStartE2EDuration="58.368227634s" podCreationTimestamp="2026-02-27 17:16:07 +0000 UTC" firstStartedPulling="2026-02-27 17:16:08.917968437 +0000 UTC m=+1367.433766024" lastFinishedPulling="2026-02-27 17:17:04.004156733 +0000 UTC m=+1422.519954330" observedRunningTime="2026-02-27 17:17:05.36343666 +0000 UTC m=+1423.879234247" watchObservedRunningTime="2026-02-27 17:17:05.368227634 +0000 UTC m=+1423.884025221" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.394628 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"5b9413126b41e16eeb4c36c22788fd075de94f7e2eacee2a3c5ec84f9f2c953e"} Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.394699 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"6b85b7d0c11c730fa9bf8866346f76af61cd2b4e79ab68b956df77bbdd66fbd5"} Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.394729 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"326bcd3c619fcea38efcc29ce37b6353b8a7ded60d1caab0d02e5d3f2e0e2d9a"} Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.394742 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"9a97649f8bcf5eb30c4814769085fcd07682a6ca988d58e68f9059abcbbeb3df"} Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.396806 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s4ckm" event={"ID":"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5","Type":"ContainerStarted","Data":"ae9b64a7309db4fedfe9919e36d91908e6101b9c6814fb46d8e7a3371b045372"} Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.401190 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ggwzp" event={"ID":"dd272ccd-a2cc-433f-80bf-96134126ce6b","Type":"ContainerStarted","Data":"50ac60033f97c37889971727e8a28f27504bd0050cd60d78aaf4f010b9c23ef4"} Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.413444 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-config-data\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.413539 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-internal-tls-certs\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") 
" pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.413589 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-fernet-keys\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.413615 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-public-tls-certs\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.413647 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-combined-ca-bundle\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.413664 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-credential-keys\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.413720 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-scripts\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.413789 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw4nv\" (UniqueName: \"kubernetes.io/projected/8d31c043-7a1b-4030-aa89-ccf8a23a766b-kube-api-access-dw4nv\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.418977 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-fernet-keys\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.423404 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-internal-tls-certs\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.439360 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-s4ckm" podStartSLOduration=3.168166128 podStartE2EDuration="58.439343623s" podCreationTimestamp="2026-02-27 17:16:07 +0000 UTC" firstStartedPulling="2026-02-27 17:16:08.732014901 +0000 UTC m=+1367.247812488" lastFinishedPulling="2026-02-27 17:17:04.003192396 +0000 UTC m=+1422.518989983" 
observedRunningTime="2026-02-27 17:17:05.434560648 +0000 UTC m=+1423.950358265" watchObservedRunningTime="2026-02-27 17:17:05.439343623 +0000 UTC m=+1423.955141210" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.442179 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-54c5f87dbb-t77v4"] Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.445116 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.452398 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-public-tls-certs\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.454075 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-credential-keys\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.458807 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-combined-ca-bundle\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.459360 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw4nv\" (UniqueName: \"kubernetes.io/projected/8d31c043-7a1b-4030-aa89-ccf8a23a766b-kube-api-access-dw4nv\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.467674 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-54c5f87dbb-t77v4"] Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.469955 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-config-data\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.471759 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-ggwzp" podStartSLOduration=3.203310316 podStartE2EDuration="58.471741263s" podCreationTimestamp="2026-02-27 17:16:07 +0000 UTC" firstStartedPulling="2026-02-27 17:16:08.736236031 +0000 UTC m=+1367.252033618" lastFinishedPulling="2026-02-27 17:17:04.004666978 +0000 UTC m=+1422.520464565" observedRunningTime="2026-02-27 17:17:05.458670876 +0000 UTC m=+1423.974468463" watchObservedRunningTime="2026-02-27 17:17:05.471741263 +0000 UTC m=+1423.987538850" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.478890 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8d31c043-7a1b-4030-aa89-ccf8a23a766b-scripts\") pod \"keystone-597b655d8b-dmxbr\" (UID: \"8d31c043-7a1b-4030-aa89-ccf8a23a766b\") " pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc 
kubenswrapper[4708]: I0227 17:17:05.617500 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b872f276-2f96-401e-b918-f031b919338a-logs\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.617822 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-scripts\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.617916 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5p4r\" (UniqueName: \"kubernetes.io/projected/b872f276-2f96-401e-b918-f031b919338a-kube-api-access-n5p4r\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.617961 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-internal-tls-certs\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.618011 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-config-data\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.618047 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-public-tls-certs\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.618422 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-combined-ca-bundle\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.722940 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-public-tls-certs\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.723214 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-combined-ca-bundle\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " 
pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.723251 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b872f276-2f96-401e-b918-f031b919338a-logs\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.723275 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-scripts\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.723322 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5p4r\" (UniqueName: \"kubernetes.io/projected/b872f276-2f96-401e-b918-f031b919338a-kube-api-access-n5p4r\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.723355 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-internal-tls-certs\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.723398 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-config-data\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.724951 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b872f276-2f96-401e-b918-f031b919338a-logs\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.728264 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-config-data\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.728449 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-public-tls-certs\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.730055 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-internal-tls-certs\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.730233 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-scripts\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.733527 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b872f276-2f96-401e-b918-f031b919338a-combined-ca-bundle\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.742798 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5p4r\" (UniqueName: \"kubernetes.io/projected/b872f276-2f96-401e-b918-f031b919338a-kube-api-access-n5p4r\") pod \"placement-54c5f87dbb-t77v4\" (UID: \"b872f276-2f96-401e-b918-f031b919338a\") " pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.753374 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:05 crc kubenswrapper[4708]: I0227 17:17:05.778807 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:06 crc kubenswrapper[4708]: I0227 17:17:06.020271 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-6d5895b968-p7cts"] Feb 27 17:17:06 crc kubenswrapper[4708]: W0227 17:17:06.151098 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5177dfe3_b55f_4a39_9a6b_392796ed3084.slice/crio-45a28912731fa3c96e68cdafdb3d30761fb3853615a89e95830969e5cafc7415 WatchSource:0}: Error finding container 45a28912731fa3c96e68cdafdb3d30761fb3853615a89e95830969e5cafc7415: Status 404 returned error can't find the container with id 45a28912731fa3c96e68cdafdb3d30761fb3853615a89e95830969e5cafc7415 Feb 27 17:17:06 crc kubenswrapper[4708]: I0227 17:17:06.254302 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60cc75e4-619b-4cb8-a663-3214b22f2b43" path="/var/lib/kubelet/pods/60cc75e4-619b-4cb8-a663-3214b22f2b43/volumes" Feb 27 17:17:06 crc kubenswrapper[4708]: I0227 17:17:06.449296 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134","Type":"ContainerStarted","Data":"fe9feaf3fd1d7772168306173f5ad83b15bce51483d974e725ebf89632e2894c"} Feb 27 17:17:06 crc kubenswrapper[4708]: I0227 17:17:06.449578 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134","Type":"ContainerStarted","Data":"3a318d4a0fb2276f84e28162129a3d0e7b994590a79831e98b0237edc4b523bc"} Feb 27 17:17:06 crc kubenswrapper[4708]: I0227 17:17:06.457234 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d5895b968-p7cts" event={"ID":"5177dfe3-b55f-4a39-9a6b-392796ed3084","Type":"ContainerStarted","Data":"45a28912731fa3c96e68cdafdb3d30761fb3853615a89e95830969e5cafc7415"} Feb 27 17:17:06 crc kubenswrapper[4708]: I0227 17:17:06.464978 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d6d082cd-70c3-4ee1-9675-294347882c7d","Type":"ContainerStarted","Data":"24738811b9ec3e9321ef8fc2690e4119c5c5b9e5efa38ce2493c447cbc025390"} Feb 27 17:17:06 crc kubenswrapper[4708]: I0227 17:17:06.498269 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=13.498254214 podStartE2EDuration="13.498254214s" podCreationTimestamp="2026-02-27 17:16:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:06.494997122 +0000 UTC m=+1425.010794709" watchObservedRunningTime="2026-02-27 17:17:06.498254214 +0000 UTC m=+1425.014051801" Feb 27 17:17:06 crc kubenswrapper[4708]: I0227 17:17:06.557797 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-597b655d8b-dmxbr"] Feb 27 17:17:06 crc kubenswrapper[4708]: I0227 17:17:06.582283 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-54c5f87dbb-t77v4"] Feb 27 17:17:07 crc kubenswrapper[4708]: I0227 17:17:07.475620 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54c5f87dbb-t77v4" event={"ID":"b872f276-2f96-401e-b918-f031b919338a","Type":"ContainerStarted","Data":"303a49772113b30edc1ae3cdbf1d0322340d676cb75c41b46cf6a405dcf7a644"} Feb 27 17:17:07 crc kubenswrapper[4708]: I0227 17:17:07.476098 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54c5f87dbb-t77v4" event={"ID":"b872f276-2f96-401e-b918-f031b919338a","Type":"ContainerStarted","Data":"07c51cf96d4838dcf9b2776c87921d40daa579505e804720e9d5d32e466f71c1"} Feb 27 17:17:07 crc kubenswrapper[4708]: I0227 17:17:07.477768 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134","Type":"ContainerStarted","Data":"fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0"} Feb 27 17:17:07 crc kubenswrapper[4708]: I0227 17:17:07.488595 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d5895b968-p7cts" event={"ID":"5177dfe3-b55f-4a39-9a6b-392796ed3084","Type":"ContainerStarted","Data":"a860c53fcddd4b16f7cde366f4a71316d8852ebfff71caf473957f7dfd0291ec"} Feb 27 17:17:07 crc kubenswrapper[4708]: I0227 17:17:07.488635 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d5895b968-p7cts" event={"ID":"5177dfe3-b55f-4a39-9a6b-392796ed3084","Type":"ContainerStarted","Data":"34baef7298c066fd3c9b5ab228daf606ec856f25a5887a64e0ba5c2cbf3e7800"} Feb 27 17:17:07 crc kubenswrapper[4708]: I0227 17:17:07.488932 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:07 crc kubenswrapper[4708]: I0227 17:17:07.488948 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:07 crc kubenswrapper[4708]: I0227 17:17:07.490485 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-597b655d8b-dmxbr" event={"ID":"8d31c043-7a1b-4030-aa89-ccf8a23a766b","Type":"ContainerStarted","Data":"a9d640423f1333a10fef491b04608de0b5892308db15752d4c4125c14510fb37"} Feb 27 17:17:07 crc kubenswrapper[4708]: I0227 17:17:07.490509 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-597b655d8b-dmxbr" 
event={"ID":"8d31c043-7a1b-4030-aa89-ccf8a23a766b","Type":"ContainerStarted","Data":"d795e6f3d085c50a640e04ef3ce68c1876e4abd2f31cbb453e777e641d057256"} Feb 27 17:17:07 crc kubenswrapper[4708]: I0227 17:17:07.490536 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:07 crc kubenswrapper[4708]: I0227 17:17:07.512147 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=16.512126969 podStartE2EDuration="16.512126969s" podCreationTimestamp="2026-02-27 17:16:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:07.509900266 +0000 UTC m=+1426.025697853" watchObservedRunningTime="2026-02-27 17:17:07.512126969 +0000 UTC m=+1426.027924556" Feb 27 17:17:07 crc kubenswrapper[4708]: I0227 17:17:07.538995 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-597b655d8b-dmxbr" podStartSLOduration=2.538978823 podStartE2EDuration="2.538978823s" podCreationTimestamp="2026-02-27 17:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:07.529805075 +0000 UTC m=+1426.045602662" watchObservedRunningTime="2026-02-27 17:17:07.538978823 +0000 UTC m=+1426.054776410" Feb 27 17:17:07 crc kubenswrapper[4708]: I0227 17:17:07.553054 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-6d5895b968-p7cts" podStartSLOduration=3.553037688 podStartE2EDuration="3.553037688s" podCreationTimestamp="2026-02-27 17:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:07.54812083 +0000 UTC m=+1426.063918417" watchObservedRunningTime="2026-02-27 17:17:07.553037688 +0000 UTC m=+1426.068835275" Feb 27 17:17:08 crc kubenswrapper[4708]: I0227 17:17:08.505449 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"4287817a58c18e5e21b54f3da7d699ebaac545be0296cbecf07ce30c04d4d0d4"} Feb 27 17:17:08 crc kubenswrapper[4708]: I0227 17:17:08.506035 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"973819a0785927f6b1bef81d1c52f9a9e78fb3b80f1303427ebab228f94c918b"} Feb 27 17:17:08 crc kubenswrapper[4708]: I0227 17:17:08.506049 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"150b3293236b2889fcc9bb585cd8df1c5e4c507ad5da86622a89c912088a50de"} Feb 27 17:17:08 crc kubenswrapper[4708]: I0227 17:17:08.506060 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"4a9821642d2f77f8a2d036dfc7077d8ae982f2380dee539364d0db0cf4ef98a5"} Feb 27 17:17:08 crc kubenswrapper[4708]: I0227 17:17:08.508430 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-54c5f87dbb-t77v4" event={"ID":"b872f276-2f96-401e-b918-f031b919338a","Type":"ContainerStarted","Data":"0242cc996ca89d57f34e5ce0a773fb3b1fffab493a5c69f02398d0564d6b3c6f"} 
Feb 27 17:17:08 crc kubenswrapper[4708]: I0227 17:17:08.509285 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:08 crc kubenswrapper[4708]: I0227 17:17:08.509327 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:08 crc kubenswrapper[4708]: I0227 17:17:08.510470 4708 generic.go:334] "Generic (PLEG): container finished" podID="dd272ccd-a2cc-433f-80bf-96134126ce6b" containerID="50ac60033f97c37889971727e8a28f27504bd0050cd60d78aaf4f010b9c23ef4" exitCode=0 Feb 27 17:17:08 crc kubenswrapper[4708]: I0227 17:17:08.510550 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ggwzp" event={"ID":"dd272ccd-a2cc-433f-80bf-96134126ce6b","Type":"ContainerDied","Data":"50ac60033f97c37889971727e8a28f27504bd0050cd60d78aaf4f010b9c23ef4"} Feb 27 17:17:08 crc kubenswrapper[4708]: I0227 17:17:08.529998 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-54c5f87dbb-t77v4" podStartSLOduration=3.529982427 podStartE2EDuration="3.529982427s" podCreationTimestamp="2026-02-27 17:17:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:08.527392684 +0000 UTC m=+1427.043190271" watchObservedRunningTime="2026-02-27 17:17:08.529982427 +0000 UTC m=+1427.045780014" Feb 27 17:17:09 crc kubenswrapper[4708]: I0227 17:17:09.530036 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"7bcf0a0a5625bd5c87e1426ce92ea51b3b833b09c269d70349bf7c5957350b4a"} Feb 27 17:17:09 crc kubenswrapper[4708]: I0227 17:17:09.530302 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"ca038d7bdeac02fe62bf868547574924cb27429a485ee4f6118f3ca0d1a7ee43"} Feb 27 17:17:09 crc kubenswrapper[4708]: I0227 17:17:09.530314 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"e8a41f59-1fee-425c-a42a-de40caa66c0f","Type":"ContainerStarted","Data":"00e4b41d47860f27a298fd917332fcca2837905d0ebad5a260a6d8c9bf84a6f7"} Feb 27 17:17:09 crc kubenswrapper[4708]: I0227 17:17:09.564709 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=68.868105684 podStartE2EDuration="2m7.564692378s" podCreationTimestamp="2026-02-27 17:15:02 +0000 UTC" firstStartedPulling="2026-02-27 17:16:08.977735292 +0000 UTC m=+1367.493532879" lastFinishedPulling="2026-02-27 17:17:07.674321986 +0000 UTC m=+1426.190119573" observedRunningTime="2026-02-27 17:17:09.561323153 +0000 UTC m=+1428.077120740" watchObservedRunningTime="2026-02-27 17:17:09.564692378 +0000 UTC m=+1428.080489965" Feb 27 17:17:09 crc kubenswrapper[4708]: I0227 17:17:09.839423 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5m4lj"] Feb 27 17:17:09 crc kubenswrapper[4708]: I0227 17:17:09.841185 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:09 crc kubenswrapper[4708]: I0227 17:17:09.844170 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Feb 27 17:17:09 crc kubenswrapper[4708]: I0227 17:17:09.851705 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5m4lj"] Feb 27 17:17:09 crc kubenswrapper[4708]: I0227 17:17:09.922251 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-dns-svc\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:09 crc kubenswrapper[4708]: I0227 17:17:09.922344 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:09 crc kubenswrapper[4708]: I0227 17:17:09.922371 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:09 crc kubenswrapper[4708]: I0227 17:17:09.922444 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4ln5\" (UniqueName: \"kubernetes.io/projected/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-kube-api-access-w4ln5\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:09 crc kubenswrapper[4708]: I0227 17:17:09.922499 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-config\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:09 crc kubenswrapper[4708]: I0227 17:17:09.922675 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:10 crc kubenswrapper[4708]: I0227 17:17:10.024749 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:10 crc kubenswrapper[4708]: I0227 17:17:10.024829 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4ln5\" (UniqueName: \"kubernetes.io/projected/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-kube-api-access-w4ln5\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" 
(UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:10 crc kubenswrapper[4708]: I0227 17:17:10.024883 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-config\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:10 crc kubenswrapper[4708]: I0227 17:17:10.024973 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:10 crc kubenswrapper[4708]: I0227 17:17:10.025021 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-dns-svc\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:10 crc kubenswrapper[4708]: I0227 17:17:10.025044 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:10 crc kubenswrapper[4708]: I0227 17:17:10.025644 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-dns-swift-storage-0\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:10 crc kubenswrapper[4708]: I0227 17:17:10.025863 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-ovsdbserver-sb\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:10 crc kubenswrapper[4708]: I0227 17:17:10.026285 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-config\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:10 crc kubenswrapper[4708]: I0227 17:17:10.026748 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-ovsdbserver-nb\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:10 crc kubenswrapper[4708]: I0227 17:17:10.027076 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-dns-svc\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:10 crc kubenswrapper[4708]: I0227 
17:17:10.048524 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4ln5\" (UniqueName: \"kubernetes.io/projected/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-kube-api-access-w4ln5\") pod \"dnsmasq-dns-55f844cf75-5m4lj\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:10 crc kubenswrapper[4708]: I0227 17:17:10.175702 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:11 crc kubenswrapper[4708]: I0227 17:17:11.382743 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 17:17:11 crc kubenswrapper[4708]: I0227 17:17:11.383065 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 17:17:11 crc kubenswrapper[4708]: I0227 17:17:11.421153 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 17:17:11 crc kubenswrapper[4708]: I0227 17:17:11.446250 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 17:17:11 crc kubenswrapper[4708]: I0227 17:17:11.555006 4708 generic.go:334] "Generic (PLEG): container finished" podID="76e1fee2-5549-44d4-aaab-c70ad0fb083e" containerID="88fec4da7b80600e36ed3573e1898c2c90c1850824d336ef91df8763e551a0db" exitCode=0 Feb 27 17:17:11 crc kubenswrapper[4708]: I0227 17:17:11.555106 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-lhfzc" event={"ID":"76e1fee2-5549-44d4-aaab-c70ad0fb083e","Type":"ContainerDied","Data":"88fec4da7b80600e36ed3573e1898c2c90c1850824d336ef91df8763e551a0db"} Feb 27 17:17:11 crc kubenswrapper[4708]: I0227 17:17:11.555370 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 17:17:11 crc kubenswrapper[4708]: I0227 17:17:11.555409 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 17:17:12 crc kubenswrapper[4708]: I0227 17:17:12.572347 4708 generic.go:334] "Generic (PLEG): container finished" podID="57f4cfb1-705b-40bb-b7aa-d722d1ec00c5" containerID="ae9b64a7309db4fedfe9919e36d91908e6101b9c6814fb46d8e7a3371b045372" exitCode=0 Feb 27 17:17:12 crc kubenswrapper[4708]: I0227 17:17:12.572442 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s4ckm" event={"ID":"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5","Type":"ContainerDied","Data":"ae9b64a7309db4fedfe9919e36d91908e6101b9c6814fb46d8e7a3371b045372"} Feb 27 17:17:12 crc kubenswrapper[4708]: I0227 17:17:12.574670 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-ggwzp" event={"ID":"dd272ccd-a2cc-433f-80bf-96134126ce6b","Type":"ContainerDied","Data":"baa4bdf8ec6ee02a2dcece05a2064e820b9e5fa345cc5d5683f82c96531049c8"} Feb 27 17:17:12 crc kubenswrapper[4708]: I0227 17:17:12.574737 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="baa4bdf8ec6ee02a2dcece05a2064e820b9e5fa345cc5d5683f82c96531049c8" Feb 27 17:17:12 crc kubenswrapper[4708]: I0227 17:17:12.583055 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-ggwzp"
Feb 27 17:17:12 crc kubenswrapper[4708]: I0227 17:17:12.680651 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd272ccd-a2cc-433f-80bf-96134126ce6b-combined-ca-bundle\") pod \"dd272ccd-a2cc-433f-80bf-96134126ce6b\" (UID: \"dd272ccd-a2cc-433f-80bf-96134126ce6b\") "
Feb 27 17:17:12 crc kubenswrapper[4708]: I0227 17:17:12.680735 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v9xr6\" (UniqueName: \"kubernetes.io/projected/dd272ccd-a2cc-433f-80bf-96134126ce6b-kube-api-access-v9xr6\") pod \"dd272ccd-a2cc-433f-80bf-96134126ce6b\" (UID: \"dd272ccd-a2cc-433f-80bf-96134126ce6b\") "
Feb 27 17:17:12 crc kubenswrapper[4708]: I0227 17:17:12.680891 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dd272ccd-a2cc-433f-80bf-96134126ce6b-db-sync-config-data\") pod \"dd272ccd-a2cc-433f-80bf-96134126ce6b\" (UID: \"dd272ccd-a2cc-433f-80bf-96134126ce6b\") "
Feb 27 17:17:12 crc kubenswrapper[4708]: I0227 17:17:12.686128 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd272ccd-a2cc-433f-80bf-96134126ce6b-kube-api-access-v9xr6" (OuterVolumeSpecName: "kube-api-access-v9xr6") pod "dd272ccd-a2cc-433f-80bf-96134126ce6b" (UID: "dd272ccd-a2cc-433f-80bf-96134126ce6b"). InnerVolumeSpecName "kube-api-access-v9xr6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:17:12 crc kubenswrapper[4708]: I0227 17:17:12.690974 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd272ccd-a2cc-433f-80bf-96134126ce6b-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "dd272ccd-a2cc-433f-80bf-96134126ce6b" (UID: "dd272ccd-a2cc-433f-80bf-96134126ce6b"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:12 crc kubenswrapper[4708]: I0227 17:17:12.746089 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd272ccd-a2cc-433f-80bf-96134126ce6b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dd272ccd-a2cc-433f-80bf-96134126ce6b" (UID: "dd272ccd-a2cc-433f-80bf-96134126ce6b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:12 crc kubenswrapper[4708]: I0227 17:17:12.785087 4708 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dd272ccd-a2cc-433f-80bf-96134126ce6b-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:12 crc kubenswrapper[4708]: I0227 17:17:12.785117 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd272ccd-a2cc-433f-80bf-96134126ce6b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:12 crc kubenswrapper[4708]: I0227 17:17:12.785127 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v9xr6\" (UniqueName: \"kubernetes.io/projected/dd272ccd-a2cc-433f-80bf-96134126ce6b-kube-api-access-v9xr6\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.547986 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.549954 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.585675 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-ggwzp"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.601077 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.604611 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.624916 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-d59d57f6-95wt9"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.856654 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.856738 4708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.873278 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.930965 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-777c49d4fd-pzrvc"]
Feb 27 17:17:13 crc kubenswrapper[4708]: E0227 17:17:13.931354 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd272ccd-a2cc-433f-80bf-96134126ce6b" containerName="barbican-db-sync"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.931367 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd272ccd-a2cc-433f-80bf-96134126ce6b" containerName="barbican-db-sync"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.931555 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd272ccd-a2cc-433f-80bf-96134126ce6b" containerName="barbican-db-sync"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.932698 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.943592 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.943787 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.943918 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-smdlt"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.947012 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5b688b6d95-78fb7"]
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.949074 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.951109 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data"
Feb 27 17:17:13 crc kubenswrapper[4708]: I0227 17:17:13.986343 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-777c49d4fd-pzrvc"]
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.011257 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b688b6d95-78fb7"]
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.093808 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5m4lj"]
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.107569 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-nq55v"]
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.109237 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.121313 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-nq55v"]
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.129198 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/317368d9-8188-4337-9a05-e504c8e90b84-logs\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.129241 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/317368d9-8188-4337-9a05-e504c8e90b84-config-data\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.129292 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-combined-ca-bundle\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.129339 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/317368d9-8188-4337-9a05-e504c8e90b84-config-data-custom\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.129370 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nxlpl\" (UniqueName: \"kubernetes.io/projected/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-kube-api-access-nxlpl\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.129388 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-config-data-custom\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.129403 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-logs\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.129454 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-config-data\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.129488 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m4v2\" (UniqueName: \"kubernetes.io/projected/317368d9-8188-4337-9a05-e504c8e90b84-kube-api-access-6m4v2\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.129506 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/317368d9-8188-4337-9a05-e504c8e90b84-combined-ca-bundle\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.151223 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-594bc68494-cmml7"]
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.153629 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.156559 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.204601 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-594bc68494-cmml7"]
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.221169 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-556cb97757-rbj2s"]
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.221443 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-556cb97757-rbj2s" podUID="d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" containerName="neutron-api" containerID="cri-o://d55a7a08666fab43e70b497c7e6ef9b5949f9e5559045907eb71973edbc42ae8" gracePeriod=30
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.221897 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-556cb97757-rbj2s" podUID="d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" containerName="neutron-httpd" containerID="cri-o://ce682dc09e4d5f957a18d41066317899c0c870b411c993a2c48812f3a73ea7e1" gracePeriod=30
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234487 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/317368d9-8188-4337-9a05-e504c8e90b84-config-data\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234561 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-combined-ca-bundle\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234603 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-config-data\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234621 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppnt9\" (UniqueName: \"kubernetes.io/projected/83cd2564-348e-472f-b75c-10ccf48a876b-kube-api-access-ppnt9\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234638 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234657 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d8544df-a61e-464b-bc9e-9a68908322c8-logs\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234679 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/317368d9-8188-4337-9a05-e504c8e90b84-config-data-custom\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234708 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-dns-svc\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234728 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nxlpl\" (UniqueName: \"kubernetes.io/projected/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-kube-api-access-nxlpl\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234748 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-config-data-custom\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234765 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-logs\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234790 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-config-data-custom\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234811 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234829 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbrhw\" (UniqueName: \"kubernetes.io/projected/1d8544df-a61e-464b-bc9e-9a68908322c8-kube-api-access-tbrhw\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234881 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234898 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-config-data\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234929 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-config\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234948 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m4v2\" (UniqueName: \"kubernetes.io/projected/317368d9-8188-4337-9a05-e504c8e90b84-kube-api-access-6m4v2\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234967 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/317368d9-8188-4337-9a05-e504c8e90b84-combined-ca-bundle\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.234995 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-combined-ca-bundle\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.235014 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/317368d9-8188-4337-9a05-e504c8e90b84-logs\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.235394 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/317368d9-8188-4337-9a05-e504c8e90b84-logs\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.235756 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-logs\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.272679 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/317368d9-8188-4337-9a05-e504c8e90b84-config-data-custom\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.273712 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-combined-ca-bundle\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.275493 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-config-data\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.276813 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-config-data-custom\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.281328 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/317368d9-8188-4337-9a05-e504c8e90b84-config-data\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.283904 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/317368d9-8188-4337-9a05-e504c8e90b84-combined-ca-bundle\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.310011 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-556cb97757-rbj2s" podUID="d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.176:9696/\": read tcp 10.217.0.2:41974->10.217.0.176:9696: read: connection reset by peer"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.348933 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tbrhw\" (UniqueName: \"kubernetes.io/projected/1d8544df-a61e-464b-bc9e-9a68908322c8-kube-api-access-tbrhw\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.349054 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.349138 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-config\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.350530 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-dns-swift-storage-0\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.351031 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-combined-ca-bundle\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.351354 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nxlpl\" (UniqueName: \"kubernetes.io/projected/dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd-kube-api-access-nxlpl\") pod \"barbican-keystone-listener-777c49d4fd-pzrvc\" (UID: \"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd\") " pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.354537 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m4v2\" (UniqueName: \"kubernetes.io/projected/317368d9-8188-4337-9a05-e504c8e90b84-kube-api-access-6m4v2\") pod \"barbican-worker-5b688b6d95-78fb7\" (UID: \"317368d9-8188-4337-9a05-e504c8e90b84\") " pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.355072 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-config-data\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.355118 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppnt9\" (UniqueName: \"kubernetes.io/projected/83cd2564-348e-472f-b75c-10ccf48a876b-kube-api-access-ppnt9\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.355838 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-config\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.355139 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.355997 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d8544df-a61e-464b-bc9e-9a68908322c8-logs\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.356596 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-ovsdbserver-sb\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.361617 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-dns-svc\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.362501 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-dns-svc\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.362737 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-config-data-custom\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.362788 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.362181 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d8544df-a61e-464b-bc9e-9a68908322c8-logs\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.363351 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-ovsdbserver-nb\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.363482 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-config-data\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.371565 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-combined-ca-bundle\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.374840 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppnt9\" (UniqueName: \"kubernetes.io/projected/83cd2564-348e-472f-b75c-10ccf48a876b-kube-api-access-ppnt9\") pod \"dnsmasq-dns-85ff748b95-nq55v\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.375347 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-config-data-custom\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.387300 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbrhw\" (UniqueName: \"kubernetes.io/projected/1d8544df-a61e-464b-bc9e-9a68908322c8-kube-api-access-tbrhw\") pod \"barbican-api-594bc68494-cmml7\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.389080 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-547f9bd6cc-98rqm"]
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.390828 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.398512 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-547f9bd6cc-98rqm"]
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.440807 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-nq55v"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.469072 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-combined-ca-bundle\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.469110 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-config\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.469145 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-ovndb-tls-certs\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.469183 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-internal-tls-certs\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.469209 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-httpd-config\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.469252 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcvq4\" (UniqueName: \"kubernetes.io/projected/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-kube-api-access-wcvq4\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.469285 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-public-tls-certs\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.501711 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.570868 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-combined-ca-bundle\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.570907 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-config\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.570939 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-ovndb-tls-certs\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.570986 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-internal-tls-certs\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.571010 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-httpd-config\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.571052 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcvq4\" (UniqueName: \"kubernetes.io/projected/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-kube-api-access-wcvq4\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.571092 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-public-tls-certs\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.573901 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.578060 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-public-tls-certs\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.584641 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-config\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.586922 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5b688b6d95-78fb7"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.590729 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-combined-ca-bundle\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.591678 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-ovndb-tls-certs\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.596939 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-httpd-config\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.611504 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-internal-tls-certs\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.620649 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcvq4\" (UniqueName: \"kubernetes.io/projected/b9aa13d2-83ae-4a00-821d-97fc5592ec7e-kube-api-access-wcvq4\") pod \"neutron-547f9bd6cc-98rqm\" (UID: \"b9aa13d2-83ae-4a00-821d-97fc5592ec7e\") " pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.620715 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.621074 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Feb 27 17:17:14 crc kubenswrapper[4708]: I0227 17:17:14.771775 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-547f9bd6cc-98rqm"
Feb 27 17:17:15 crc kubenswrapper[4708]: I0227 17:17:15.642925 4708 generic.go:334] "Generic (PLEG): container finished" podID="d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" containerID="ce682dc09e4d5f957a18d41066317899c0c870b411c993a2c48812f3a73ea7e1" exitCode=0
Feb 27 17:17:15 crc kubenswrapper[4708]: I0227 17:17:15.644121 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-556cb97757-rbj2s" event={"ID":"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5","Type":"ContainerDied","Data":"ce682dc09e4d5f957a18d41066317899c0c870b411c993a2c48812f3a73ea7e1"}
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.553376 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-lhfzc"
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.587165 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-s4ckm"
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.634412 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-config-data\") pod \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") "
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.634445 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-config-data\") pod \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") "
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.634590 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qv87w\" (UniqueName: \"kubernetes.io/projected/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-kube-api-access-qv87w\") pod \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") "
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.634629 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-combined-ca-bundle\") pod \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") "
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.634722 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-scripts\") pod \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") "
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.634745 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-db-sync-config-data\") pod \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") "
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.634764 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnhfv\" (UniqueName: \"kubernetes.io/projected/76e1fee2-5549-44d4-aaab-c70ad0fb083e-kube-api-access-hnhfv\") pod \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") "
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.634782 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-combined-ca-bundle\") pod \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") "
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.634801 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-etc-machine-id\") pod \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") "
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.635077 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/76e1fee2-5549-44d4-aaab-c70ad0fb083e-certs\") pod \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\" (UID: \"76e1fee2-5549-44d4-aaab-c70ad0fb083e\") "
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.635120 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-scripts\") pod \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\" (UID: \"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5\") "
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.646014 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "57f4cfb1-705b-40bb-b7aa-d722d1ec00c5" (UID: "57f4cfb1-705b-40bb-b7aa-d722d1ec00c5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.659556 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-lhfzc" event={"ID":"76e1fee2-5549-44d4-aaab-c70ad0fb083e","Type":"ContainerDied","Data":"2c8f24933bbef6410f1f12a9f07c14058d5dca33642d26bafaeb29eb7243b677"}
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.659795 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c8f24933bbef6410f1f12a9f07c14058d5dca33642d26bafaeb29eb7243b677"
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.659868 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-db-sync-lhfzc"
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.659900 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "57f4cfb1-705b-40bb-b7aa-d722d1ec00c5" (UID: "57f4cfb1-705b-40bb-b7aa-d722d1ec00c5"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.661177 4708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.661208 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-s4ckm" event={"ID":"57f4cfb1-705b-40bb-b7aa-d722d1ec00c5","Type":"ContainerDied","Data":"15e110982efa8d48f8047d340a81a15741809b95dfcb8ff635da1c80e215f375"}
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.661230 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15e110982efa8d48f8047d340a81a15741809b95dfcb8ff635da1c80e215f375"
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.661216 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-s4ckm"
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.661219 4708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.661772 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-kube-api-access-qv87w" (OuterVolumeSpecName: "kube-api-access-qv87w") pod "57f4cfb1-705b-40bb-b7aa-d722d1ec00c5" (UID: "57f4cfb1-705b-40bb-b7aa-d722d1ec00c5"). InnerVolumeSpecName "kube-api-access-qv87w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.664652 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-scripts" (OuterVolumeSpecName: "scripts") pod "57f4cfb1-705b-40bb-b7aa-d722d1ec00c5" (UID: "57f4cfb1-705b-40bb-b7aa-d722d1ec00c5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.678167 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-config-data" (OuterVolumeSpecName: "config-data") pod "76e1fee2-5549-44d4-aaab-c70ad0fb083e" (UID: "76e1fee2-5549-44d4-aaab-c70ad0fb083e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.678704 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e1fee2-5549-44d4-aaab-c70ad0fb083e-certs" (OuterVolumeSpecName: "certs") pod "76e1fee2-5549-44d4-aaab-c70ad0fb083e" (UID: "76e1fee2-5549-44d4-aaab-c70ad0fb083e"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.680208 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76e1fee2-5549-44d4-aaab-c70ad0fb083e-kube-api-access-hnhfv" (OuterVolumeSpecName: "kube-api-access-hnhfv") pod "76e1fee2-5549-44d4-aaab-c70ad0fb083e" (UID: "76e1fee2-5549-44d4-aaab-c70ad0fb083e"). InnerVolumeSpecName "kube-api-access-hnhfv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.680465 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-scripts" (OuterVolumeSpecName: "scripts") pod "76e1fee2-5549-44d4-aaab-c70ad0fb083e" (UID: "76e1fee2-5549-44d4-aaab-c70ad0fb083e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:16 crc kubenswrapper[4708]: E0227 17:17:16.682999 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"ceilometer-central-agent\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\", failed to \"StartContainer\" for \"sg-core\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"]" pod="openstack/ceilometer-0" podUID="d047b4cb-8a38-4b0b-b667-0b78aeb2a166"
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.716293 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-556cb97757-rbj2s" podUID="d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.176:9696/\": dial tcp 10.217.0.176:9696: connect: connection refused"
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.729311 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "76e1fee2-5549-44d4-aaab-c70ad0fb083e" (UID: "76e1fee2-5549-44d4-aaab-c70ad0fb083e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.742831 4708 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.743060 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnhfv\" (UniqueName: \"kubernetes.io/projected/76e1fee2-5549-44d4-aaab-c70ad0fb083e-kube-api-access-hnhfv\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.743124 4708 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.743184 4708 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/76e1fee2-5549-44d4-aaab-c70ad0fb083e-certs\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.743249 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.743305 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.743361 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qv87w\" (UniqueName: \"kubernetes.io/projected/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-kube-api-access-qv87w\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.743416 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.743467 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/76e1fee2-5549-44d4-aaab-c70ad0fb083e-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.792024 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "57f4cfb1-705b-40bb-b7aa-d722d1ec00c5" (UID: "57f4cfb1-705b-40bb-b7aa-d722d1ec00c5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.792950 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-config-data" (OuterVolumeSpecName: "config-data") pod "57f4cfb1-705b-40bb-b7aa-d722d1ec00c5" (UID: "57f4cfb1-705b-40bb-b7aa-d722d1ec00c5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.845633 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:16 crc kubenswrapper[4708]: I0227 17:17:16.845664 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.038250 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5m4lj"]
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.189485 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7c6cc57cfd-rj6nd"]
Feb 27 17:17:17 crc kubenswrapper[4708]: E0227 17:17:17.190502 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76e1fee2-5549-44d4-aaab-c70ad0fb083e" containerName="cloudkitty-db-sync"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.190518 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="76e1fee2-5549-44d4-aaab-c70ad0fb083e" containerName="cloudkitty-db-sync"
Feb 27 17:17:17 crc kubenswrapper[4708]: E0227 17:17:17.190555 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57f4cfb1-705b-40bb-b7aa-d722d1ec00c5" containerName="cinder-db-sync"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.190563 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="57f4cfb1-705b-40bb-b7aa-d722d1ec00c5" containerName="cinder-db-sync"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.190885 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="76e1fee2-5549-44d4-aaab-c70ad0fb083e" containerName="cloudkitty-db-sync"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.190925 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="57f4cfb1-705b-40bb-b7aa-d722d1ec00c5" containerName="cinder-db-sync"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.207116 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.214875 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c6cc57cfd-rj6nd"]
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.219239 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.220178 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.382798 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-config-data\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.382865 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-internal-tls-certs\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.382959 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-combined-ca-bundle\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.383000 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpmt9\" (UniqueName: \"kubernetes.io/projected/46ded50a-aa4c-47e7-8768-82bb22fff933-kube-api-access-kpmt9\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.383074 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-config-data-custom\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.383121 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-public-tls-certs\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.383158 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ded50a-aa4c-47e7-8768-82bb22fff933-logs\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.418489 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-777c49d4fd-pzrvc"]
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.440857 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-nq55v"]
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.466211 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5b688b6d95-78fb7"]
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.483362 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.487591 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-public-tls-certs\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.487660 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ded50a-aa4c-47e7-8768-82bb22fff933-logs\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.490649 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46ded50a-aa4c-47e7-8768-82bb22fff933-logs\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.491752 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-config-data\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.491780 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-internal-tls-certs\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.491959 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-combined-ca-bundle\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.492025 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpmt9\" (UniqueName: \"kubernetes.io/projected/46ded50a-aa4c-47e7-8768-82bb22fff933-kube-api-access-kpmt9\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.492151 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-config-data-custom\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: W0227 17:17:17.501608 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83cd2564_348e_472f_b75c_10ccf48a876b.slice/crio-6cda9fe9560f9f89a0b297b04cbf6afb108135671033bfdd975a4ac3ebe27b15 WatchSource:0}: Error finding container 6cda9fe9560f9f89a0b297b04cbf6afb108135671033bfdd975a4ac3ebe27b15: Status 404 returned error can't find the container with id 6cda9fe9560f9f89a0b297b04cbf6afb108135671033bfdd975a4ac3ebe27b15
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.503765 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-public-tls-certs\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.504783 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-internal-tls-certs\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.505827 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-config-data\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.509471 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-config-data-custom\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.513525 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/46ded50a-aa4c-47e7-8768-82bb22fff933-combined-ca-bundle\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.514135 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpmt9\" (UniqueName: \"kubernetes.io/projected/46ded50a-aa4c-47e7-8768-82bb22fff933-kube-api-access-kpmt9\") pod \"barbican-api-7c6cc57cfd-rj6nd\" (UID: \"46ded50a-aa4c-47e7-8768-82bb22fff933\") " pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.532009 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-594bc68494-cmml7"]
Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.534118 4708 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/barbican-api-7c6cc57cfd-rj6nd" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.579117 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-547f9bd6cc-98rqm"] Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.683456 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d047b4cb-8a38-4b0b-b667-0b78aeb2a166","Type":"ContainerStarted","Data":"6f45e887963e9c65f2005b7d12772669abeb9319813f012ecbadf5192b76c7a3"} Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.683883 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d047b4cb-8a38-4b0b-b667-0b78aeb2a166" containerName="ceilometer-notification-agent" containerID="cri-o://128b26bd67d7107d59156662763792b8fd1281bd074fbdebafa8650b6a50ce0f" gracePeriod=30 Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.683966 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.684279 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d047b4cb-8a38-4b0b-b667-0b78aeb2a166" containerName="proxy-httpd" containerID="cri-o://6f45e887963e9c65f2005b7d12772669abeb9319813f012ecbadf5192b76c7a3" gracePeriod=30 Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.688773 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-547f9bd6cc-98rqm" event={"ID":"b9aa13d2-83ae-4a00-821d-97fc5592ec7e","Type":"ContainerStarted","Data":"03daeac9a566cd3e89e45e1c9bc9964dfcac42929771ba18ca5b57ccc5436608"} Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.696096 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-594bc68494-cmml7" event={"ID":"1d8544df-a61e-464b-bc9e-9a68908322c8","Type":"ContainerStarted","Data":"a9903b0b99bcbaa67241af05bc3c9dcb57a88bd93ecead44eb23e2b3fab5d1b6"} Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.698225 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" event={"ID":"80da6b16-2bf9-4528-86cc-a5e9a4e0187a","Type":"ContainerStarted","Data":"7a50f281d2328b1ac7771c055187b524e9070d319c2864d8e81922b4081ff9af"} Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.698267 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" event={"ID":"80da6b16-2bf9-4528-86cc-a5e9a4e0187a","Type":"ContainerStarted","Data":"737c0108678c05d4786b8bb43efac8706a83f292f173bcda49a3c9f9c4483f57"} Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.699360 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b688b6d95-78fb7" event={"ID":"317368d9-8188-4337-9a05-e504c8e90b84","Type":"ContainerStarted","Data":"7706cd7d99ced4bda01df9dfaac4f5e092972e62f8866411802f792ba842fd7b"} Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.700090 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-nq55v" event={"ID":"83cd2564-348e-472f-b75c-10ccf48a876b","Type":"ContainerStarted","Data":"6cda9fe9560f9f89a0b297b04cbf6afb108135671033bfdd975a4ac3ebe27b15"} Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.701238 4708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.701940 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc" event={"ID":"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd","Type":"ContainerStarted","Data":"3fde882684b3c2be75276925ba2e66bf82595dbd502491d25079120694708de6"} Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.787917 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-5p4n2"] Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.789495 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.792324 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.793130 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-2sp9f" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.793403 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.793543 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.793728 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.808715 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-5p4n2"] Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.908814 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-scripts\") pod \"cloudkitty-storageinit-5p4n2\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.910265 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-config-data\") pod \"cloudkitty-storageinit-5p4n2\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.910338 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-combined-ca-bundle\") pod \"cloudkitty-storageinit-5p4n2\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.910387 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5375b346-0435-45a2-bc67-f966299a9f4f-certs\") pod \"cloudkitty-storageinit-5p4n2\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.910472 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzfps\" (UniqueName: \"kubernetes.io/projected/5375b346-0435-45a2-bc67-f966299a9f4f-kube-api-access-dzfps\") pod \"cloudkitty-storageinit-5p4n2\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " 
pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.916855 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.918528 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.920082 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.926637 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.926982 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.927096 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.927191 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-hrv76" Feb 27 17:17:17 crc kubenswrapper[4708]: I0227 17:17:17.960946 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.012902 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.012961 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/102af832-14be-4626-9549-7e6fdd8abe4f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.013003 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.013046 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-scripts\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.013094 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5dw7\" (UniqueName: \"kubernetes.io/projected/102af832-14be-4626-9549-7e6fdd8abe4f-kube-api-access-c5dw7\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.013125 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-scripts\") pod \"cloudkitty-storageinit-5p4n2\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.013157 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-config-data\") pod \"cloudkitty-storageinit-5p4n2\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.013183 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-combined-ca-bundle\") pod \"cloudkitty-storageinit-5p4n2\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.013205 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5375b346-0435-45a2-bc67-f966299a9f4f-certs\") pod \"cloudkitty-storageinit-5p4n2\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.013246 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dzfps\" (UniqueName: \"kubernetes.io/projected/5375b346-0435-45a2-bc67-f966299a9f4f-kube-api-access-dzfps\") pod \"cloudkitty-storageinit-5p4n2\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.013289 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-config-data\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.021875 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-config-data\") pod \"cloudkitty-storageinit-5p4n2\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.023749 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5375b346-0435-45a2-bc67-f966299a9f4f-certs\") pod \"cloudkitty-storageinit-5p4n2\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.024359 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-scripts\") pod \"cloudkitty-storageinit-5p4n2\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.025497 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-combined-ca-bundle\") pod \"cloudkitty-storageinit-5p4n2\" (UID: 
\"5375b346-0435-45a2-bc67-f966299a9f4f\") " pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.058407 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzfps\" (UniqueName: \"kubernetes.io/projected/5375b346-0435-45a2-bc67-f966299a9f4f-kube-api-access-dzfps\") pod \"cloudkitty-storageinit-5p4n2\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.085725 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-nq55v"] Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.116466 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5dw7\" (UniqueName: \"kubernetes.io/projected/102af832-14be-4626-9549-7e6fdd8abe4f-kube-api-access-c5dw7\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.116610 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-config-data\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.116634 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.116661 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/102af832-14be-4626-9549-7e6fdd8abe4f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.116692 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.116730 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-scripts\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.121429 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/102af832-14be-4626-9549-7e6fdd8abe4f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.125150 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-scripts\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " 
pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.128379 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.128952 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-jdbxl"] Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.131689 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.132686 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-config-data\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.143316 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.143945 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-jdbxl"] Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.159306 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5dw7\" (UniqueName: \"kubernetes.io/projected/102af832-14be-4626-9549-7e6fdd8abe4f-kube-api-access-c5dw7\") pod \"cinder-scheduler-0\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") " pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.197291 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.226902 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.228966 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.232423 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.277645 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.295796 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.329342 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-config\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.329600 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.329663 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.329695 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-config-data-custom\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.329720 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.329743 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8z86\" (UniqueName: \"kubernetes.io/projected/e1dda127-a3f5-474b-b992-a26590e8507b-kube-api-access-f8z86\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.329768 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-scripts\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.329798 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe2a72f-1849-4e9a-a275-8d92879371e8-logs\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.329816 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.333507 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l24cv\" (UniqueName: \"kubernetes.io/projected/3fe2a72f-1849-4e9a-a275-8d92879371e8-kube-api-access-l24cv\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.333579 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-config-data\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.333633 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fe2a72f-1849-4e9a-a275-8d92879371e8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.333676 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.436100 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.436175 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-config-data-custom\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.436199 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.436230 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8z86\" (UniqueName: \"kubernetes.io/projected/e1dda127-a3f5-474b-b992-a26590e8507b-kube-api-access-f8z86\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.436269 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-scripts\") pod \"cinder-api-0\" (UID: 
\"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.436287 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe2a72f-1849-4e9a-a275-8d92879371e8-logs\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.436304 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.436343 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l24cv\" (UniqueName: \"kubernetes.io/projected/3fe2a72f-1849-4e9a-a275-8d92879371e8-kube-api-access-l24cv\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.436418 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-config-data\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.436454 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fe2a72f-1849-4e9a-a275-8d92879371e8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.436503 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.436551 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-config\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.436583 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.437836 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-ovsdbserver-nb\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.438493 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-dns-swift-storage-0\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.439144 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-dns-svc\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.441316 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-ovsdbserver-sb\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.441354 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fe2a72f-1849-4e9a-a275-8d92879371e8-etc-machine-id\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.442239 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-config\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.445942 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe2a72f-1849-4e9a-a275-8d92879371e8-logs\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.462863 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-config-data-custom\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.469360 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8z86\" (UniqueName: \"kubernetes.io/projected/e1dda127-a3f5-474b-b992-a26590e8507b-kube-api-access-f8z86\") pod \"dnsmasq-dns-5c9776ccc5-jdbxl\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.470358 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l24cv\" (UniqueName: \"kubernetes.io/projected/3fe2a72f-1849-4e9a-a275-8d92879371e8-kube-api-access-l24cv\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.479030 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-config-data\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 
17:17:18.479315 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-scripts\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.481347 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.538520 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.561695 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.580235 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7c6cc57cfd-rj6nd"] Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.595501 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.741586 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-ovsdbserver-nb\") pod \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.741951 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4ln5\" (UniqueName: \"kubernetes.io/projected/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-kube-api-access-w4ln5\") pod \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.741994 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-dns-swift-storage-0\") pod \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.742022 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-ovsdbserver-sb\") pod \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.742111 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-dns-svc\") pod \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.742131 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-config\") pod \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\" (UID: \"80da6b16-2bf9-4528-86cc-a5e9a4e0187a\") " Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.744287 4708 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c6cc57cfd-rj6nd" event={"ID":"46ded50a-aa4c-47e7-8768-82bb22fff933","Type":"ContainerStarted","Data":"c8610fb052a72cea0847d823fc633aefaf49715d63ac3b8d49b38503b75f8437"} Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.754962 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-kube-api-access-w4ln5" (OuterVolumeSpecName: "kube-api-access-w4ln5") pod "80da6b16-2bf9-4528-86cc-a5e9a4e0187a" (UID: "80da6b16-2bf9-4528-86cc-a5e9a4e0187a"). InnerVolumeSpecName "kube-api-access-w4ln5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.759040 4708 generic.go:334] "Generic (PLEG): container finished" podID="d047b4cb-8a38-4b0b-b667-0b78aeb2a166" containerID="6f45e887963e9c65f2005b7d12772669abeb9319813f012ecbadf5192b76c7a3" exitCode=0 Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.759097 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d047b4cb-8a38-4b0b-b667-0b78aeb2a166","Type":"ContainerDied","Data":"6f45e887963e9c65f2005b7d12772669abeb9319813f012ecbadf5192b76c7a3"} Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.777994 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-547f9bd6cc-98rqm" event={"ID":"b9aa13d2-83ae-4a00-821d-97fc5592ec7e","Type":"ContainerStarted","Data":"a4d3d38cf3b92ffdf79b1b633846f7b2feb35420b0e377ee65442357bcb261e2"} Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.812047 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-594bc68494-cmml7" event={"ID":"1d8544df-a61e-464b-bc9e-9a68908322c8","Type":"ContainerStarted","Data":"46fc9d23eae1c0a82d436083bb5fdbed5d47c370b34f2bc54b95098ee4666e0e"} Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.813634 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-594bc68494-cmml7" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.813648 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-594bc68494-cmml7" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.819816 4708 generic.go:334] "Generic (PLEG): container finished" podID="80da6b16-2bf9-4528-86cc-a5e9a4e0187a" containerID="7a50f281d2328b1ac7771c055187b524e9070d319c2864d8e81922b4081ff9af" exitCode=0 Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.820017 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" event={"ID":"80da6b16-2bf9-4528-86cc-a5e9a4e0187a","Type":"ContainerDied","Data":"7a50f281d2328b1ac7771c055187b524e9070d319c2864d8e81922b4081ff9af"} Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.820049 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" event={"ID":"80da6b16-2bf9-4528-86cc-a5e9a4e0187a","Type":"ContainerDied","Data":"737c0108678c05d4786b8bb43efac8706a83f292f173bcda49a3c9f9c4483f57"} Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.820068 4708 scope.go:117] "RemoveContainer" containerID="7a50f281d2328b1ac7771c055187b524e9070d319c2864d8e81922b4081ff9af" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.820199 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-55f844cf75-5m4lj" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.839828 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-594bc68494-cmml7" podStartSLOduration=4.83980602 podStartE2EDuration="4.83980602s" podCreationTimestamp="2026-02-27 17:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:18.828121552 +0000 UTC m=+1437.343919139" watchObservedRunningTime="2026-02-27 17:17:18.83980602 +0000 UTC m=+1437.355603607" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.840823 4708 generic.go:334] "Generic (PLEG): container finished" podID="83cd2564-348e-472f-b75c-10ccf48a876b" containerID="148d3c6761522cd028862b8880bf4dd7a822fcc83d7c2049bf549a44d7440b8a" exitCode=0 Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.841045 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-nq55v" event={"ID":"83cd2564-348e-472f-b75c-10ccf48a876b","Type":"ContainerDied","Data":"148d3c6761522cd028862b8880bf4dd7a822fcc83d7c2049bf549a44d7440b8a"} Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.845128 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4ln5\" (UniqueName: \"kubernetes.io/projected/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-kube-api-access-w4ln5\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.905040 4708 scope.go:117] "RemoveContainer" containerID="7a50f281d2328b1ac7771c055187b524e9070d319c2864d8e81922b4081ff9af" Feb 27 17:17:18 crc kubenswrapper[4708]: E0227 17:17:18.906119 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a50f281d2328b1ac7771c055187b524e9070d319c2864d8e81922b4081ff9af\": container with ID starting with 7a50f281d2328b1ac7771c055187b524e9070d319c2864d8e81922b4081ff9af not found: ID does not exist" containerID="7a50f281d2328b1ac7771c055187b524e9070d319c2864d8e81922b4081ff9af" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.906153 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a50f281d2328b1ac7771c055187b524e9070d319c2864d8e81922b4081ff9af"} err="failed to get container status \"7a50f281d2328b1ac7771c055187b524e9070d319c2864d8e81922b4081ff9af\": rpc error: code = NotFound desc = could not find container \"7a50f281d2328b1ac7771c055187b524e9070d319c2864d8e81922b4081ff9af\": container with ID starting with 7a50f281d2328b1ac7771c055187b524e9070d319c2864d8e81922b4081ff9af not found: ID does not exist" Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.938803 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-5p4n2"] Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.948997 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 17:17:18 crc kubenswrapper[4708]: I0227 17:17:18.996875 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-config" (OuterVolumeSpecName: "config") pod "80da6b16-2bf9-4528-86cc-a5e9a4e0187a" (UID: "80da6b16-2bf9-4528-86cc-a5e9a4e0187a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.053201 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.086624 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "80da6b16-2bf9-4528-86cc-a5e9a4e0187a" (UID: "80da6b16-2bf9-4528-86cc-a5e9a4e0187a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.145864 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "80da6b16-2bf9-4528-86cc-a5e9a4e0187a" (UID: "80da6b16-2bf9-4528-86cc-a5e9a4e0187a"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.156146 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.156179 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.217977 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-jdbxl"] Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.232301 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "80da6b16-2bf9-4528-86cc-a5e9a4e0187a" (UID: "80da6b16-2bf9-4528-86cc-a5e9a4e0187a"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.257800 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:19 crc kubenswrapper[4708]: W0227 17:17:19.268191 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode1dda127_a3f5_474b_b992_a26590e8507b.slice/crio-a04eba9c32c9f9f1e5b06e509a986bbdccd39b7b678b1e67a2d1509f03ceff30 WatchSource:0}: Error finding container a04eba9c32c9f9f1e5b06e509a986bbdccd39b7b678b1e67a2d1509f03ceff30: Status 404 returned error can't find the container with id a04eba9c32c9f9f1e5b06e509a986bbdccd39b7b678b1e67a2d1509f03ceff30 Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.277893 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "80da6b16-2bf9-4528-86cc-a5e9a4e0187a" (UID: "80da6b16-2bf9-4528-86cc-a5e9a4e0187a"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.354995 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.363253 4708 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/80da6b16-2bf9-4528-86cc-a5e9a4e0187a-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.632760 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5m4lj"] Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.662518 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-55f844cf75-5m4lj"] Feb 27 17:17:19 crc kubenswrapper[4708]: E0227 17:17:19.702890 4708 log.go:32] "CreateContainer in sandbox from runtime service failed" err=< Feb 27 17:17:19 crc kubenswrapper[4708]: rpc error: code = Unknown desc = container create failed: mount `/var/lib/kubelet/pods/83cd2564-348e-472f-b75c-10ccf48a876b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 27 17:17:19 crc kubenswrapper[4708]: > podSandboxID="6cda9fe9560f9f89a0b297b04cbf6afb108135671033bfdd975a4ac3ebe27b15" Feb 27 17:17:19 crc kubenswrapper[4708]: E0227 17:17:19.703006 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:17:19 crc kubenswrapper[4708]: container &Container{Name:dnsmasq-dns,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n7ch57ch5c5hcch589hf7h577h659h96h5c8h5b4h55fhbbh667h565h5bchcbh58dh7dh5bch586h56ch574h598h67dh5c8h56dh8bh574h564hbch7q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-swift-storage-0,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-swift-storage-0,SubPath:dns-swift-storage-0,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-nb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-nb,SubPath:ovsdbserver-nb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovsdbserver-sb,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/ovsdbserver-sb,SubPath:ovsdbserver-sb,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ppnt9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 
},Host:,},GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:5,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:nil,TCPSocket:&TCPSocketAction{Port:{0 5353 },Host:,},GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-85ff748b95-nq55v_openstack(83cd2564-348e-472f-b75c-10ccf48a876b): CreateContainerError: container create failed: mount `/var/lib/kubelet/pods/83cd2564-348e-472f-b75c-10ccf48a876b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory Feb 27 17:17:19 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:17:19 crc kubenswrapper[4708]: E0227 17:17:19.704214 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dnsmasq-dns\" with CreateContainerError: \"container create failed: mount `/var/lib/kubelet/pods/83cd2564-348e-472f-b75c-10ccf48a876b/volume-subpaths/dns-svc/dnsmasq-dns/1` to `etc/dnsmasq.d/hosts/dns-svc`: No such file or directory\\n\"" pod="openstack/dnsmasq-dns-85ff748b95-nq55v" podUID="83cd2564-348e-472f-b75c-10ccf48a876b" Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.870067 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3fe2a72f-1849-4e9a-a275-8d92879371e8","Type":"ContainerStarted","Data":"7dcb9f727e098495a21394b974984c05f0f871e027cccb51215e829c26dc731d"} Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.872344 4708 generic.go:334] "Generic (PLEG): container finished" podID="e1dda127-a3f5-474b-b992-a26590e8507b" containerID="fe1a156f7e537f23b2251dfe5e120498e46bab142b6130979acef0599c810c31" exitCode=0 Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.872414 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" event={"ID":"e1dda127-a3f5-474b-b992-a26590e8507b","Type":"ContainerDied","Data":"fe1a156f7e537f23b2251dfe5e120498e46bab142b6130979acef0599c810c31"} Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.872443 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" event={"ID":"e1dda127-a3f5-474b-b992-a26590e8507b","Type":"ContainerStarted","Data":"a04eba9c32c9f9f1e5b06e509a986bbdccd39b7b678b1e67a2d1509f03ceff30"} Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.876160 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-547f9bd6cc-98rqm" event={"ID":"b9aa13d2-83ae-4a00-821d-97fc5592ec7e","Type":"ContainerStarted","Data":"1002e3821307ddc1ea3082cafc33e011607786e25a1bc03866779c635be3f69f"} Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.876605 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/neutron-547f9bd6cc-98rqm" Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.877542 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"102af832-14be-4626-9549-7e6fdd8abe4f","Type":"ContainerStarted","Data":"324b33b6905c6f30863fa933bff76cbee73b4c4203bc2796c83e01660178109d"} Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.879150 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-594bc68494-cmml7" event={"ID":"1d8544df-a61e-464b-bc9e-9a68908322c8","Type":"ContainerStarted","Data":"c8eb35e2b9a4b1db1b402a991bbdf2cdfe102f9f6a5e195b8a17f8047fa73f76"} Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.883124 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c6cc57cfd-rj6nd" event={"ID":"46ded50a-aa4c-47e7-8768-82bb22fff933","Type":"ContainerStarted","Data":"ddcf45c9e930081638a5e6b2357993bac6cdee95b2e120542cbfa40bc0720b56"} Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.898198 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-5p4n2" event={"ID":"5375b346-0435-45a2-bc67-f966299a9f4f","Type":"ContainerStarted","Data":"c0826cfc80041253dde32adc62c0129e47e3f9f59f58071e7a30056235d0f416"} Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.898234 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-5p4n2" event={"ID":"5375b346-0435-45a2-bc67-f966299a9f4f","Type":"ContainerStarted","Data":"6cd02f40f7eb3045b349839ba70bf67ff2952bab3f34721df035a32d31860532"} Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.913725 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-547f9bd6cc-98rqm" podStartSLOduration=5.9137093929999995 podStartE2EDuration="5.913709393s" podCreationTimestamp="2026-02-27 17:17:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:19.90649236 +0000 UTC m=+1438.422289947" watchObservedRunningTime="2026-02-27 17:17:19.913709393 +0000 UTC m=+1438.429506980" Feb 27 17:17:19 crc kubenswrapper[4708]: I0227 17:17:19.931048 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-5p4n2" podStartSLOduration=2.931029529 podStartE2EDuration="2.931029529s" podCreationTimestamp="2026-02-27 17:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:19.922249783 +0000 UTC m=+1438.438047370" watchObservedRunningTime="2026-02-27 17:17:19.931029529 +0000 UTC m=+1438.446827116" Feb 27 17:17:20 crc kubenswrapper[4708]: I0227 17:17:20.268615 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80da6b16-2bf9-4528-86cc-a5e9a4e0187a" path="/var/lib/kubelet/pods/80da6b16-2bf9-4528-86cc-a5e9a4e0187a/volumes" Feb 27 17:17:20 crc kubenswrapper[4708]: I0227 17:17:20.906757 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7c6cc57cfd-rj6nd" event={"ID":"46ded50a-aa4c-47e7-8768-82bb22fff933","Type":"ContainerStarted","Data":"3441d2f5c5236cbb46f4894f4fad6bb5a2106ef2aeb64e665aaf4ae0216a8c3d"} Feb 27 17:17:20 crc kubenswrapper[4708]: I0227 17:17:20.907447 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c6cc57cfd-rj6nd" Feb 27 17:17:20 crc kubenswrapper[4708]: I0227 
17:17:20.907461 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7c6cc57cfd-rj6nd" Feb 27 17:17:20 crc kubenswrapper[4708]: I0227 17:17:20.909609 4708 generic.go:334] "Generic (PLEG): container finished" podID="d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" containerID="d55a7a08666fab43e70b497c7e6ef9b5949f9e5559045907eb71973edbc42ae8" exitCode=0 Feb 27 17:17:20 crc kubenswrapper[4708]: I0227 17:17:20.909666 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-556cb97757-rbj2s" event={"ID":"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5","Type":"ContainerDied","Data":"d55a7a08666fab43e70b497c7e6ef9b5949f9e5559045907eb71973edbc42ae8"} Feb 27 17:17:20 crc kubenswrapper[4708]: I0227 17:17:20.911912 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3fe2a72f-1849-4e9a-a275-8d92879371e8","Type":"ContainerStarted","Data":"417966c8291053de3f9df5900c6f91822e535572b4b59ebcbbe6c070336d0829"} Feb 27 17:17:20 crc kubenswrapper[4708]: I0227 17:17:20.913435 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" event={"ID":"e1dda127-a3f5-474b-b992-a26590e8507b","Type":"ContainerStarted","Data":"0249e25a23b07a0e0dce118071e141133b15967274c28d05058d3823a52ad65a"} Feb 27 17:17:20 crc kubenswrapper[4708]: I0227 17:17:20.914279 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:20 crc kubenswrapper[4708]: I0227 17:17:20.915831 4708 generic.go:334] "Generic (PLEG): container finished" podID="d047b4cb-8a38-4b0b-b667-0b78aeb2a166" containerID="128b26bd67d7107d59156662763792b8fd1281bd074fbdebafa8650b6a50ce0f" exitCode=0 Feb 27 17:17:20 crc kubenswrapper[4708]: I0227 17:17:20.917312 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d047b4cb-8a38-4b0b-b667-0b78aeb2a166","Type":"ContainerDied","Data":"128b26bd67d7107d59156662763792b8fd1281bd074fbdebafa8650b6a50ce0f"} Feb 27 17:17:20 crc kubenswrapper[4708]: I0227 17:17:20.951414 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7c6cc57cfd-rj6nd" podStartSLOduration=3.951396318 podStartE2EDuration="3.951396318s" podCreationTimestamp="2026-02-27 17:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:20.928220567 +0000 UTC m=+1439.444018154" watchObservedRunningTime="2026-02-27 17:17:20.951396318 +0000 UTC m=+1439.467193905" Feb 27 17:17:20 crc kubenswrapper[4708]: I0227 17:17:20.952173 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" podStartSLOduration=2.9521684390000003 podStartE2EDuration="2.952168439s" podCreationTimestamp="2026-02-27 17:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:20.9504008 +0000 UTC m=+1439.466198387" watchObservedRunningTime="2026-02-27 17:17:20.952168439 +0000 UTC m=+1439.467966026" Feb 27 17:17:20 crc kubenswrapper[4708]: I0227 17:17:20.992685 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.225123 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.226304 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-nq55v" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.339826 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-ovndb-tls-certs\") pod \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.339954 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppnt9\" (UniqueName: \"kubernetes.io/projected/83cd2564-348e-472f-b75c-10ccf48a876b-kube-api-access-ppnt9\") pod \"83cd2564-348e-472f-b75c-10ccf48a876b\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.340011 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-internal-tls-certs\") pod \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.340056 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-ovsdbserver-nb\") pod \"83cd2564-348e-472f-b75c-10ccf48a876b\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.340171 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-combined-ca-bundle\") pod \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.340220 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-dns-swift-storage-0\") pod \"83cd2564-348e-472f-b75c-10ccf48a876b\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.340245 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-public-tls-certs\") pod \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.340317 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-ovsdbserver-sb\") pod \"83cd2564-348e-472f-b75c-10ccf48a876b\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.340344 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-config\") pod \"83cd2564-348e-472f-b75c-10ccf48a876b\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.340379 4708 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-config\") pod \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.340406 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-httpd-config\") pod \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.340427 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-dns-svc\") pod \"83cd2564-348e-472f-b75c-10ccf48a876b\" (UID: \"83cd2564-348e-472f-b75c-10ccf48a876b\") " Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.340469 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvc6d\" (UniqueName: \"kubernetes.io/projected/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-kube-api-access-jvc6d\") pod \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\" (UID: \"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5\") " Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.366044 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" (UID: "d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.406200 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83cd2564-348e-472f-b75c-10ccf48a876b-kube-api-access-ppnt9" (OuterVolumeSpecName: "kube-api-access-ppnt9") pod "83cd2564-348e-472f-b75c-10ccf48a876b" (UID: "83cd2564-348e-472f-b75c-10ccf48a876b"). InnerVolumeSpecName "kube-api-access-ppnt9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.420046 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-kube-api-access-jvc6d" (OuterVolumeSpecName: "kube-api-access-jvc6d") pod "d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" (UID: "d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5"). InnerVolumeSpecName "kube-api-access-jvc6d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.450167 4708 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.450403 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvc6d\" (UniqueName: \"kubernetes.io/projected/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-kube-api-access-jvc6d\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.450412 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppnt9\" (UniqueName: \"kubernetes.io/projected/83cd2564-348e-472f-b75c-10ccf48a876b-kube-api-access-ppnt9\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.655066 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" (UID: "d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.658933 4708 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.714319 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" (UID: "d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.724449 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" (UID: "d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.753328 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "83cd2564-348e-472f-b75c-10ccf48a876b" (UID: "83cd2564-348e-472f-b75c-10ccf48a876b"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.760362 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.760394 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.760404 4708 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.760436 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-config" (OuterVolumeSpecName: "config") pod "83cd2564-348e-472f-b75c-10ccf48a876b" (UID: "83cd2564-348e-472f-b75c-10ccf48a876b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.781622 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "83cd2564-348e-472f-b75c-10ccf48a876b" (UID: "83cd2564-348e-472f-b75c-10ccf48a876b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.783375 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "83cd2564-348e-472f-b75c-10ccf48a876b" (UID: "83cd2564-348e-472f-b75c-10ccf48a876b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.785544 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-config" (OuterVolumeSpecName: "config") pod "d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" (UID: "d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.803482 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" (UID: "d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.811394 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "83cd2564-348e-472f-b75c-10ccf48a876b" (UID: "83cd2564-348e-472f-b75c-10ccf48a876b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.862377 4708 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.862408 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.862419 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.862428 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.862436 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/83cd2564-348e-472f-b75c-10ccf48a876b-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.862444 4708 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.930430 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85ff748b95-nq55v" event={"ID":"83cd2564-348e-472f-b75c-10ccf48a876b","Type":"ContainerDied","Data":"6cda9fe9560f9f89a0b297b04cbf6afb108135671033bfdd975a4ac3ebe27b15"} Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.930482 4708 scope.go:117] "RemoveContainer" containerID="148d3c6761522cd028862b8880bf4dd7a822fcc83d7c2049bf549a44d7440b8a" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.930525 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85ff748b95-nq55v" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.946434 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-556cb97757-rbj2s" Feb 27 17:17:21 crc kubenswrapper[4708]: I0227 17:17:21.949056 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-556cb97757-rbj2s" event={"ID":"d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5","Type":"ContainerDied","Data":"dc90c89f14a9541d16a137f0592f2d66bf4982a4aae577943ba5c27d252731ad"} Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.011278 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-nq55v"] Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.021229 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-85ff748b95-nq55v"] Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.030361 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-556cb97757-rbj2s"] Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.037990 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-556cb97757-rbj2s"] Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.235372 4708 scope.go:117] "RemoveContainer" containerID="ce682dc09e4d5f957a18d41066317899c0c870b411c993a2c48812f3a73ea7e1" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.250968 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83cd2564-348e-472f-b75c-10ccf48a876b" path="/var/lib/kubelet/pods/83cd2564-348e-472f-b75c-10ccf48a876b/volumes" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.251831 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" path="/var/lib/kubelet/pods/d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5/volumes" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.299476 4708 scope.go:117] "RemoveContainer" containerID="d55a7a08666fab43e70b497c7e6ef9b5949f9e5559045907eb71973edbc42ae8" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.741791 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.802565 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-run-httpd\") pod \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.802628 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppg6d\" (UniqueName: \"kubernetes.io/projected/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-kube-api-access-ppg6d\") pod \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.802798 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-log-httpd\") pod \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.802856 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-config-data\") pod \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.802901 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-combined-ca-bundle\") pod \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.802957 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-sg-core-conf-yaml\") pod \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.802992 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-scripts\") pod \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\" (UID: \"d047b4cb-8a38-4b0b-b667-0b78aeb2a166\") " Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.803350 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d047b4cb-8a38-4b0b-b667-0b78aeb2a166" (UID: "d047b4cb-8a38-4b0b-b667-0b78aeb2a166"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.803777 4708 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.803994 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d047b4cb-8a38-4b0b-b667-0b78aeb2a166" (UID: "d047b4cb-8a38-4b0b-b667-0b78aeb2a166"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.809577 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d047b4cb-8a38-4b0b-b667-0b78aeb2a166" (UID: "d047b4cb-8a38-4b0b-b667-0b78aeb2a166"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.812372 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-kube-api-access-ppg6d" (OuterVolumeSpecName: "kube-api-access-ppg6d") pod "d047b4cb-8a38-4b0b-b667-0b78aeb2a166" (UID: "d047b4cb-8a38-4b0b-b667-0b78aeb2a166"). InnerVolumeSpecName "kube-api-access-ppg6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.813470 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-scripts" (OuterVolumeSpecName: "scripts") pod "d047b4cb-8a38-4b0b-b667-0b78aeb2a166" (UID: "d047b4cb-8a38-4b0b-b667-0b78aeb2a166"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.880064 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d047b4cb-8a38-4b0b-b667-0b78aeb2a166" (UID: "d047b4cb-8a38-4b0b-b667-0b78aeb2a166"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.906104 4708 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.906136 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppg6d\" (UniqueName: \"kubernetes.io/projected/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-kube-api-access-ppg6d\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.906147 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.906157 4708 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.906165 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.920923 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-config-data" (OuterVolumeSpecName: "config-data") pod "d047b4cb-8a38-4b0b-b667-0b78aeb2a166" (UID: "d047b4cb-8a38-4b0b-b667-0b78aeb2a166"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.967548 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b688b6d95-78fb7" event={"ID":"317368d9-8188-4337-9a05-e504c8e90b84","Type":"ContainerStarted","Data":"3ac5b3b5e1b38dc99203801c6572e8636e692f37953feed2ebfd0cbe5f3d7f50"} Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.967593 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5b688b6d95-78fb7" event={"ID":"317368d9-8188-4337-9a05-e504c8e90b84","Type":"ContainerStarted","Data":"02ce31399c2f76ee58863c287ac00c2b3071911c77ec0c84ac245623c2b70e3b"} Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.972325 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc" event={"ID":"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd","Type":"ContainerStarted","Data":"8369c47f46de3fe61f417684c3fdf47f2c68323113e4de3366392843fe71b901"} Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.974114 4708 generic.go:334] "Generic (PLEG): container finished" podID="5375b346-0435-45a2-bc67-f966299a9f4f" containerID="c0826cfc80041253dde32adc62c0129e47e3f9f59f58071e7a30056235d0f416" exitCode=0 Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.974161 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-5p4n2" event={"ID":"5375b346-0435-45a2-bc67-f966299a9f4f","Type":"ContainerDied","Data":"c0826cfc80041253dde32adc62c0129e47e3f9f59f58071e7a30056235d0f416"} Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.976731 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.976715 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d047b4cb-8a38-4b0b-b667-0b78aeb2a166","Type":"ContainerDied","Data":"8a5346cdee744ed3f3957aefba2ee609d1109de637e189b1dda554792bdcac72"} Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.976873 4708 scope.go:117] "RemoveContainer" containerID="6f45e887963e9c65f2005b7d12772669abeb9319813f012ecbadf5192b76c7a3" Feb 27 17:17:22 crc kubenswrapper[4708]: I0227 17:17:22.982211 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"102af832-14be-4626-9549-7e6fdd8abe4f","Type":"ContainerStarted","Data":"8d49d39b1fd44df00882341b59bd7d3621df88ec06e1b8e2086e41bd6b5ba118"} Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:22.999839 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5b688b6d95-78fb7" podStartSLOduration=5.21405209 podStartE2EDuration="9.99982107s" podCreationTimestamp="2026-02-27 17:17:13 +0000 UTC" firstStartedPulling="2026-02-27 17:17:17.495907363 +0000 UTC m=+1436.011704940" lastFinishedPulling="2026-02-27 17:17:22.281676333 +0000 UTC m=+1440.797473920" observedRunningTime="2026-02-27 17:17:22.986960858 +0000 UTC m=+1441.502758445" watchObservedRunningTime="2026-02-27 17:17:22.99982107 +0000 UTC m=+1441.515618647" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.009576 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d047b4cb-8a38-4b0b-b667-0b78aeb2a166-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.090758 4708 scope.go:117] 
"RemoveContainer" containerID="128b26bd67d7107d59156662763792b8fd1281bd074fbdebafa8650b6a50ce0f" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.098190 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.123618 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.134780 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:17:23 crc kubenswrapper[4708]: E0227 17:17:23.135334 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d047b4cb-8a38-4b0b-b667-0b78aeb2a166" containerName="proxy-httpd" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.135353 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d047b4cb-8a38-4b0b-b667-0b78aeb2a166" containerName="proxy-httpd" Feb 27 17:17:23 crc kubenswrapper[4708]: E0227 17:17:23.135385 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d047b4cb-8a38-4b0b-b667-0b78aeb2a166" containerName="ceilometer-notification-agent" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.135393 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d047b4cb-8a38-4b0b-b667-0b78aeb2a166" containerName="ceilometer-notification-agent" Feb 27 17:17:23 crc kubenswrapper[4708]: E0227 17:17:23.135407 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" containerName="neutron-api" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.135413 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" containerName="neutron-api" Feb 27 17:17:23 crc kubenswrapper[4708]: E0227 17:17:23.135425 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" containerName="neutron-httpd" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.135432 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" containerName="neutron-httpd" Feb 27 17:17:23 crc kubenswrapper[4708]: E0227 17:17:23.135444 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80da6b16-2bf9-4528-86cc-a5e9a4e0187a" containerName="init" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.135652 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="80da6b16-2bf9-4528-86cc-a5e9a4e0187a" containerName="init" Feb 27 17:17:23 crc kubenswrapper[4708]: E0227 17:17:23.135739 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83cd2564-348e-472f-b75c-10ccf48a876b" containerName="init" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.135748 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="83cd2564-348e-472f-b75c-10ccf48a876b" containerName="init" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.140775 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d047b4cb-8a38-4b0b-b667-0b78aeb2a166" containerName="proxy-httpd" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.140799 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d047b4cb-8a38-4b0b-b667-0b78aeb2a166" containerName="ceilometer-notification-agent" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.140806 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="83cd2564-348e-472f-b75c-10ccf48a876b" containerName="init" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.140818 
4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" containerName="neutron-httpd" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.140831 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="80da6b16-2bf9-4528-86cc-a5e9a4e0187a" containerName="init" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.140854 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d59cc7e7-a9d8-4aa0-a65d-9d73d053e5d5" containerName="neutron-api" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.142982 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.147420 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.148448 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.157606 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.215633 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-scripts\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.215689 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-config-data\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.215729 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pngsd\" (UniqueName: \"kubernetes.io/projected/cc027c08-56ee-4816-b983-daa9250ba660-kube-api-access-pngsd\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.215764 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.215788 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc027c08-56ee-4816-b983-daa9250ba660-run-httpd\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.215814 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc027c08-56ee-4816-b983-daa9250ba660-log-httpd\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.215899 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.316938 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc027c08-56ee-4816-b983-daa9250ba660-run-httpd\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.317251 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc027c08-56ee-4816-b983-daa9250ba660-log-httpd\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.317325 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.317378 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-scripts\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.317412 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-config-data\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.317480 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pngsd\" (UniqueName: \"kubernetes.io/projected/cc027c08-56ee-4816-b983-daa9250ba660-kube-api-access-pngsd\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.317515 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.319913 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc027c08-56ee-4816-b983-daa9250ba660-log-httpd\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.320154 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc027c08-56ee-4816-b983-daa9250ba660-run-httpd\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.321985 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-scripts\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.322930 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-config-data\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.326168 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.327345 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.336738 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pngsd\" (UniqueName: \"kubernetes.io/projected/cc027c08-56ee-4816-b983-daa9250ba660-kube-api-access-pngsd\") pod \"ceilometer-0\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.481314 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.994319 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"102af832-14be-4626-9549-7e6fdd8abe4f","Type":"ContainerStarted","Data":"59289c75815dbc7e3f32916b080fd3d52435ae6ee823668d89691240c4624a4a"} Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.996516 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc" event={"ID":"dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd","Type":"ContainerStarted","Data":"bc1e458c013bc49b3f647ccdaa95e2cd5dbe1b59cb08dbf178a05409336798cf"} Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.999578 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3fe2a72f-1849-4e9a-a275-8d92879371e8" containerName="cinder-api-log" containerID="cri-o://417966c8291053de3f9df5900c6f91822e535572b4b59ebcbbe6c070336d0829" gracePeriod=30 Feb 27 17:17:23 crc kubenswrapper[4708]: I0227 17:17:23.999787 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3fe2a72f-1849-4e9a-a275-8d92879371e8","Type":"ContainerStarted","Data":"358899e6111480be262e5e535103af94cd584d03bab99a450cd74b65bbb5572b"} Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.000277 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.000312 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="3fe2a72f-1849-4e9a-a275-8d92879371e8" containerName="cinder-api" containerID="cri-o://358899e6111480be262e5e535103af94cd584d03bab99a450cd74b65bbb5572b" gracePeriod=30 Feb 27 17:17:24 crc 
kubenswrapper[4708]: I0227 17:17:24.056890 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=5.9299518970000005 podStartE2EDuration="7.056871309s" podCreationTimestamp="2026-02-27 17:17:17 +0000 UTC" firstStartedPulling="2026-02-27 17:17:19.02737436 +0000 UTC m=+1437.543171947" lastFinishedPulling="2026-02-27 17:17:20.154293772 +0000 UTC m=+1438.670091359" observedRunningTime="2026-02-27 17:17:24.043174414 +0000 UTC m=+1442.558972001" watchObservedRunningTime="2026-02-27 17:17:24.056871309 +0000 UTC m=+1442.572668896" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.071621 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-777c49d4fd-pzrvc" podStartSLOduration=6.236113396 podStartE2EDuration="11.071604683s" podCreationTimestamp="2026-02-27 17:17:13 +0000 UTC" firstStartedPulling="2026-02-27 17:17:17.461424324 +0000 UTC m=+1435.977221911" lastFinishedPulling="2026-02-27 17:17:22.296915611 +0000 UTC m=+1440.812713198" observedRunningTime="2026-02-27 17:17:24.066597902 +0000 UTC m=+1442.582395489" watchObservedRunningTime="2026-02-27 17:17:24.071604683 +0000 UTC m=+1442.587402270" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.108953 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=6.108932871 podStartE2EDuration="6.108932871s" podCreationTimestamp="2026-02-27 17:17:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:24.097351116 +0000 UTC m=+1442.613148703" watchObservedRunningTime="2026-02-27 17:17:24.108932871 +0000 UTC m=+1442.624730458" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.165328 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.269031 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d047b4cb-8a38-4b0b-b667-0b78aeb2a166" path="/var/lib/kubelet/pods/d047b4cb-8a38-4b0b-b667-0b78aeb2a166/volumes" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.578509 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-5p4n2" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.670955 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-combined-ca-bundle\") pod \"5375b346-0435-45a2-bc67-f966299a9f4f\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.671098 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dzfps\" (UniqueName: \"kubernetes.io/projected/5375b346-0435-45a2-bc67-f966299a9f4f-kube-api-access-dzfps\") pod \"5375b346-0435-45a2-bc67-f966299a9f4f\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.672105 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-scripts\") pod \"5375b346-0435-45a2-bc67-f966299a9f4f\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.672148 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-config-data\") pod \"5375b346-0435-45a2-bc67-f966299a9f4f\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.672284 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5375b346-0435-45a2-bc67-f966299a9f4f-certs\") pod \"5375b346-0435-45a2-bc67-f966299a9f4f\" (UID: \"5375b346-0435-45a2-bc67-f966299a9f4f\") " Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.685034 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5375b346-0435-45a2-bc67-f966299a9f4f-kube-api-access-dzfps" (OuterVolumeSpecName: "kube-api-access-dzfps") pod "5375b346-0435-45a2-bc67-f966299a9f4f" (UID: "5375b346-0435-45a2-bc67-f966299a9f4f"). InnerVolumeSpecName "kube-api-access-dzfps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.703753 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-scripts" (OuterVolumeSpecName: "scripts") pod "5375b346-0435-45a2-bc67-f966299a9f4f" (UID: "5375b346-0435-45a2-bc67-f966299a9f4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.707755 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5375b346-0435-45a2-bc67-f966299a9f4f-certs" (OuterVolumeSpecName: "certs") pod "5375b346-0435-45a2-bc67-f966299a9f4f" (UID: "5375b346-0435-45a2-bc67-f966299a9f4f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.733016 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-config-data" (OuterVolumeSpecName: "config-data") pod "5375b346-0435-45a2-bc67-f966299a9f4f" (UID: "5375b346-0435-45a2-bc67-f966299a9f4f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.779619 4708 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5375b346-0435-45a2-bc67-f966299a9f4f-certs\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.779655 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dzfps\" (UniqueName: \"kubernetes.io/projected/5375b346-0435-45a2-bc67-f966299a9f4f-kube-api-access-dzfps\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.779667 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.779675 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.779798 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5375b346-0435-45a2-bc67-f966299a9f4f" (UID: "5375b346-0435-45a2-bc67-f966299a9f4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:24 crc kubenswrapper[4708]: I0227 17:17:24.883061 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5375b346-0435-45a2-bc67-f966299a9f4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.014717 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-5p4n2" event={"ID":"5375b346-0435-45a2-bc67-f966299a9f4f","Type":"ContainerDied","Data":"6cd02f40f7eb3045b349839ba70bf67ff2952bab3f34721df035a32d31860532"} Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.014756 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6cd02f40f7eb3045b349839ba70bf67ff2952bab3f34721df035a32d31860532" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.014812 4708 util.go:48] "No ready sandbox for pod can be found. 
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.019326 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc027c08-56ee-4816-b983-daa9250ba660","Type":"ContainerStarted","Data":"7c6478e9982d18b252bb982a0023fe74fd9b3e644b48f396ad52ca3ccc2a7153"}
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.042005 4708 generic.go:334] "Generic (PLEG): container finished" podID="3fe2a72f-1849-4e9a-a275-8d92879371e8" containerID="358899e6111480be262e5e535103af94cd584d03bab99a450cd74b65bbb5572b" exitCode=0
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.042045 4708 generic.go:334] "Generic (PLEG): container finished" podID="3fe2a72f-1849-4e9a-a275-8d92879371e8" containerID="417966c8291053de3f9df5900c6f91822e535572b4b59ebcbbe6c070336d0829" exitCode=143
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.043013 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3fe2a72f-1849-4e9a-a275-8d92879371e8","Type":"ContainerDied","Data":"358899e6111480be262e5e535103af94cd584d03bab99a450cd74b65bbb5572b"}
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.043045 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3fe2a72f-1849-4e9a-a275-8d92879371e8","Type":"ContainerDied","Data":"417966c8291053de3f9df5900c6f91822e535572b4b59ebcbbe6c070336d0829"}
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.067943 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.191702 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l24cv\" (UniqueName: \"kubernetes.io/projected/3fe2a72f-1849-4e9a-a275-8d92879371e8-kube-api-access-l24cv\") pod \"3fe2a72f-1849-4e9a-a275-8d92879371e8\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") "
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.191767 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fe2a72f-1849-4e9a-a275-8d92879371e8-etc-machine-id\") pod \"3fe2a72f-1849-4e9a-a275-8d92879371e8\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") "
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.191819 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-combined-ca-bundle\") pod \"3fe2a72f-1849-4e9a-a275-8d92879371e8\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") "
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.191867 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe2a72f-1849-4e9a-a275-8d92879371e8-logs\") pod \"3fe2a72f-1849-4e9a-a275-8d92879371e8\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") "
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.191894 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-config-data\") pod \"3fe2a72f-1849-4e9a-a275-8d92879371e8\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") "
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.191948 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume
"kubernetes.io/host-path/3fe2a72f-1849-4e9a-a275-8d92879371e8-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3fe2a72f-1849-4e9a-a275-8d92879371e8" (UID: "3fe2a72f-1849-4e9a-a275-8d92879371e8"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.191985 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-config-data-custom\") pod \"3fe2a72f-1849-4e9a-a275-8d92879371e8\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.192071 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-scripts\") pod \"3fe2a72f-1849-4e9a-a275-8d92879371e8\" (UID: \"3fe2a72f-1849-4e9a-a275-8d92879371e8\") " Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.192658 4708 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3fe2a72f-1849-4e9a-a275-8d92879371e8-etc-machine-id\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.195158 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fe2a72f-1849-4e9a-a275-8d92879371e8-logs" (OuterVolumeSpecName: "logs") pod "3fe2a72f-1849-4e9a-a275-8d92879371e8" (UID: "3fe2a72f-1849-4e9a-a275-8d92879371e8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.200700 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "3fe2a72f-1849-4e9a-a275-8d92879371e8" (UID: "3fe2a72f-1849-4e9a-a275-8d92879371e8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.200912 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-scripts" (OuterVolumeSpecName: "scripts") pod "3fe2a72f-1849-4e9a-a275-8d92879371e8" (UID: "3fe2a72f-1849-4e9a-a275-8d92879371e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.207276 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fe2a72f-1849-4e9a-a275-8d92879371e8-kube-api-access-l24cv" (OuterVolumeSpecName: "kube-api-access-l24cv") pod "3fe2a72f-1849-4e9a-a275-8d92879371e8" (UID: "3fe2a72f-1849-4e9a-a275-8d92879371e8"). InnerVolumeSpecName "kube-api-access-l24cv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.248180 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3fe2a72f-1849-4e9a-a275-8d92879371e8" (UID: "3fe2a72f-1849-4e9a-a275-8d92879371e8"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:25 crc kubenswrapper[4708]: E0227 17:17:25.253752 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5375b346_0435_45a2_bc67_f966299a9f4f.slice/crio-6cd02f40f7eb3045b349839ba70bf67ff2952bab3f34721df035a32d31860532\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5375b346_0435_45a2_bc67_f966299a9f4f.slice\": RecentStats: unable to find data in memory cache]" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.295216 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l24cv\" (UniqueName: \"kubernetes.io/projected/3fe2a72f-1849-4e9a-a275-8d92879371e8-kube-api-access-l24cv\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.295238 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.295248 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3fe2a72f-1849-4e9a-a275-8d92879371e8-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.295257 4708 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.295265 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.305883 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 27 17:17:25 crc kubenswrapper[4708]: E0227 17:17:25.306207 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5375b346-0435-45a2-bc67-f966299a9f4f" containerName="cloudkitty-storageinit" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.306225 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5375b346-0435-45a2-bc67-f966299a9f4f" containerName="cloudkitty-storageinit" Feb 27 17:17:25 crc kubenswrapper[4708]: E0227 17:17:25.306246 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe2a72f-1849-4e9a-a275-8d92879371e8" containerName="cinder-api" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.306253 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe2a72f-1849-4e9a-a275-8d92879371e8" containerName="cinder-api" Feb 27 17:17:25 crc kubenswrapper[4708]: E0227 17:17:25.306264 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fe2a72f-1849-4e9a-a275-8d92879371e8" containerName="cinder-api-log" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.306270 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fe2a72f-1849-4e9a-a275-8d92879371e8" containerName="cinder-api-log" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.306462 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe2a72f-1849-4e9a-a275-8d92879371e8" containerName="cinder-api-log" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.306496 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fe2a72f-1849-4e9a-a275-8d92879371e8" containerName="cinder-api"
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.307585 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.314445 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data"
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.314555 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts"
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.314454 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data"
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.314717 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal"
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.314664 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-2sp9f"
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.321509 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.330243 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-config-data" (OuterVolumeSpecName: "config-data") pod "3fe2a72f-1849-4e9a-a275-8d92879371e8" (UID: "3fe2a72f-1849-4e9a-a275-8d92879371e8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.396900 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-jdbxl"] Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.397146 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" podUID="e1dda127-a3f5-474b-b992-a26590e8507b" containerName="dnsmasq-dns" containerID="cri-o://0249e25a23b07a0e0dce118071e141133b15967274c28d05058d3823a52ad65a" gracePeriod=10 Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.398894 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.401297 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bdff2588-dcf5-43de-9d14-44da1a137a87-certs\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.401346 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlq55\" (UniqueName: \"kubernetes.io/projected/bdff2588-dcf5-43de-9d14-44da1a137a87-kube-api-access-wlq55\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.401527 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.401582 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-scripts\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.401646 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-config-data\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.401700 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.401791 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3fe2a72f-1849-4e9a-a275-8d92879371e8-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.455764 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-mqlv7"] Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.457419 4708 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.505025 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bdff2588-dcf5-43de-9d14-44da1a137a87-certs\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.505557 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlq55\" (UniqueName: \"kubernetes.io/projected/bdff2588-dcf5-43de-9d14-44da1a137a87-kube-api-access-wlq55\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.505739 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.505865 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-scripts\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.505968 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-config-data\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.506056 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.518410 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-mqlv7"] Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.523693 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.524251 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-scripts\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.532969 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-config-data\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 
17:17:25.539404 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlq55\" (UniqueName: \"kubernetes.io/projected/bdff2588-dcf5-43de-9d14-44da1a137a87-kube-api-access-wlq55\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.539962 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.542282 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.544079 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.572392 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bdff2588-dcf5-43de-9d14-44da1a137a87-certs\") pod \"cloudkitty-proc-0\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.572718 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.597919 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.608467 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.608520 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-config\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.608561 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-dns-svc\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.608611 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.608643 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzbbc\" (UniqueName: 
\"kubernetes.io/projected/562eb33e-5d66-4492-ba9b-dda2b6666471-kube-api-access-wzbbc\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.608668 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.636599 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.709877 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.709919 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-scripts\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.709952 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzbbc\" (UniqueName: \"kubernetes.io/projected/562eb33e-5d66-4492-ba9b-dda2b6666471-kube-api-access-wzbbc\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.709970 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.709993 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.710027 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.710067 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-config-data\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc 
kubenswrapper[4708]: I0227 17:17:25.710109 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eb86d99-f452-4dcb-87a2-2402ded393d4-logs\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.710127 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.710149 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-config\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.710164 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f7mr\" (UniqueName: \"kubernetes.io/projected/5eb86d99-f452-4dcb-87a2-2402ded393d4-kube-api-access-4f7mr\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.710197 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5eb86d99-f452-4dcb-87a2-2402ded393d4-certs\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.710216 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-dns-svc\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.711006 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-dns-svc\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.715637 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-ovsdbserver-nb\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.717571 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-dns-swift-storage-0\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.718098 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-config\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.718636 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-ovsdbserver-sb\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.741047 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzbbc\" (UniqueName: \"kubernetes.io/projected/562eb33e-5d66-4492-ba9b-dda2b6666471-kube-api-access-wzbbc\") pod \"dnsmasq-dns-67bdc55879-mqlv7\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.816666 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5eb86d99-f452-4dcb-87a2-2402ded393d4-certs\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.816955 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-scripts\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.816991 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.817037 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.817076 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-config-data\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.817119 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eb86d99-f452-4dcb-87a2-2402ded393d4-logs\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.817157 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4f7mr\" (UniqueName: \"kubernetes.io/projected/5eb86d99-f452-4dcb-87a2-2402ded393d4-kube-api-access-4f7mr\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 
17:17:25.817962 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eb86d99-f452-4dcb-87a2-2402ded393d4-logs\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.821214 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.826261 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-scripts\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.826478 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.831682 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-config-data\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.834336 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4f7mr\" (UniqueName: \"kubernetes.io/projected/5eb86d99-f452-4dcb-87a2-2402ded393d4-kube-api-access-4f7mr\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:25 crc kubenswrapper[4708]: I0227 17:17:25.838402 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5eb86d99-f452-4dcb-87a2-2402ded393d4-certs\") pod \"cloudkitty-api-0\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") " pod="openstack/cloudkitty-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.008940 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.096097 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc027c08-56ee-4816-b983-daa9250ba660","Type":"ContainerStarted","Data":"1c5ea90bcd9e0faced67c0695e67549dac22e2c2a5f3e6945d8727033e1384a8"} Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.123705 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.158498 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"3fe2a72f-1849-4e9a-a275-8d92879371e8","Type":"ContainerDied","Data":"7dcb9f727e098495a21394b974984c05f0f871e027cccb51215e829c26dc731d"} Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.158762 4708 scope.go:117] "RemoveContainer" containerID="358899e6111480be262e5e535103af94cd584d03bab99a450cd74b65bbb5572b" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.158983 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.189338 4708 generic.go:334] "Generic (PLEG): container finished" podID="e1dda127-a3f5-474b-b992-a26590e8507b" containerID="0249e25a23b07a0e0dce118071e141133b15967274c28d05058d3823a52ad65a" exitCode=0 Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.189392 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" event={"ID":"e1dda127-a3f5-474b-b992-a26590e8507b","Type":"ContainerDied","Data":"0249e25a23b07a0e0dce118071e141133b15967274c28d05058d3823a52ad65a"} Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.297173 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.309276 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.309338 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.314799 4708 scope.go:117] "RemoveContainer" containerID="417966c8291053de3f9df5900c6f91822e535572b4b59ebcbbe6c070336d0829" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.324162 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Feb 27 17:17:26 crc kubenswrapper[4708]: E0227 17:17:26.324565 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1dda127-a3f5-474b-b992-a26590e8507b" containerName="init" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.324582 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1dda127-a3f5-474b-b992-a26590e8507b" containerName="init" Feb 27 17:17:26 crc kubenswrapper[4708]: E0227 17:17:26.324618 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1dda127-a3f5-474b-b992-a26590e8507b" containerName="dnsmasq-dns" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.324624 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1dda127-a3f5-474b-b992-a26590e8507b" containerName="dnsmasq-dns" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.324829 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1dda127-a3f5-474b-b992-a26590e8507b" containerName="dnsmasq-dns" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.325823 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.335698 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.335890 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.337818 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.381985 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.440281 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-ovsdbserver-sb\") pod \"e1dda127-a3f5-474b-b992-a26590e8507b\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.440570 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-config\") pod \"e1dda127-a3f5-474b-b992-a26590e8507b\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.440720 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-ovsdbserver-nb\") pod \"e1dda127-a3f5-474b-b992-a26590e8507b\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.440739 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-dns-svc\") pod \"e1dda127-a3f5-474b-b992-a26590e8507b\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.440881 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-dns-swift-storage-0\") pod \"e1dda127-a3f5-474b-b992-a26590e8507b\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.440922 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f8z86\" (UniqueName: \"kubernetes.io/projected/e1dda127-a3f5-474b-b992-a26590e8507b-kube-api-access-f8z86\") pod \"e1dda127-a3f5-474b-b992-a26590e8507b\" (UID: \"e1dda127-a3f5-474b-b992-a26590e8507b\") " Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.441145 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-config-data\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.441186 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-config-data-custom\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " 
pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.441213 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395e91ca-8629-4557-bcb7-f84d7f61b61d-logs\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.441237 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.441286 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/395e91ca-8629-4557-bcb7-f84d7f61b61d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.441328 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7hr6\" (UniqueName: \"kubernetes.io/projected/395e91ca-8629-4557-bcb7-f84d7f61b61d-kube-api-access-l7hr6\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.441346 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.441378 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.441402 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-scripts\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.457473 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1dda127-a3f5-474b-b992-a26590e8507b-kube-api-access-f8z86" (OuterVolumeSpecName: "kube-api-access-f8z86") pod "e1dda127-a3f5-474b-b992-a26590e8507b" (UID: "e1dda127-a3f5-474b-b992-a26590e8507b"). InnerVolumeSpecName "kube-api-access-f8z86". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.526082 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-config" (OuterVolumeSpecName: "config") pod "e1dda127-a3f5-474b-b992-a26590e8507b" (UID: "e1dda127-a3f5-474b-b992-a26590e8507b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.540390 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.544152 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e1dda127-a3f5-474b-b992-a26590e8507b" (UID: "e1dda127-a3f5-474b-b992-a26590e8507b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.545316 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.545369 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/395e91ca-8629-4557-bcb7-f84d7f61b61d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.545389 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7hr6\" (UniqueName: \"kubernetes.io/projected/395e91ca-8629-4557-bcb7-f84d7f61b61d-kube-api-access-l7hr6\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.545407 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.545438 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.545465 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-scripts\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.545551 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-config-data\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.545575 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-config-data-custom\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0" Feb 27 17:17:26 crc kubenswrapper[4708]: 
I0227 17:17:26.545600 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395e91ca-8629-4557-bcb7-f84d7f61b61d-logs\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0"
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.545651 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.545663 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f8z86\" (UniqueName: \"kubernetes.io/projected/e1dda127-a3f5-474b-b992-a26590e8507b-kube-api-access-f8z86\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.545673 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-config\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.550680 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/395e91ca-8629-4557-bcb7-f84d7f61b61d-etc-machine-id\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0"
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.565445 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395e91ca-8629-4557-bcb7-f84d7f61b61d-logs\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0"
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.567434 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0"
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.567484 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-config-data-custom\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0"
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.574669 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-config-data\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0"
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.581746 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0"
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.583368 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-scripts\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0"
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.589303 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7hr6\" (UniqueName: \"kubernetes.io/projected/395e91ca-8629-4557-bcb7-f84d7f61b61d-kube-api-access-l7hr6\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0"
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.612655 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e1dda127-a3f5-474b-b992-a26590e8507b" (UID: "e1dda127-a3f5-474b-b992-a26590e8507b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.625172 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e1dda127-a3f5-474b-b992-a26590e8507b" (UID: "e1dda127-a3f5-474b-b992-a26590e8507b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.632444 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/395e91ca-8629-4557-bcb7-f84d7f61b61d-public-tls-certs\") pod \"cinder-api-0\" (UID: \"395e91ca-8629-4557-bcb7-f84d7f61b61d\") " pod="openstack/cinder-api-0"
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.656865 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.656896 4708 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.680480 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e1dda127-a3f5-474b-b992-a26590e8507b" (UID: "e1dda127-a3f5-474b-b992-a26590e8507b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.717214 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0"
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.761158 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e1dda127-a3f5-474b-b992-a26590e8507b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:26 crc kubenswrapper[4708]: I0227 17:17:26.868768 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-mqlv7"]
Feb 27 17:17:27 crc kubenswrapper[4708]: I0227 17:17:27.075891 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 27 17:17:27 crc kubenswrapper[4708]: W0227 17:17:27.128395 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5eb86d99_f452_4dcb_87a2_2402ded393d4.slice/crio-17c7cafe15b056cfb33e00bdc3376f8688c6dab72f7447a26d9d78b8a79cb21d WatchSource:0}: Error finding container 17c7cafe15b056cfb33e00bdc3376f8688c6dab72f7447a26d9d78b8a79cb21d: Status 404 returned error can't find the container with id 17c7cafe15b056cfb33e00bdc3376f8688c6dab72f7447a26d9d78b8a79cb21d
Feb 27 17:17:27 crc kubenswrapper[4708]: I0227 17:17:27.212071 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" event={"ID":"562eb33e-5d66-4492-ba9b-dda2b6666471","Type":"ContainerStarted","Data":"f81035c3adb51c26073a5d5f1510c1d0eb02762a99e4fcf29ebc18e0ce491b26"}
Feb 27 17:17:27 crc kubenswrapper[4708]: I0227 17:17:27.213078 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"bdff2588-dcf5-43de-9d14-44da1a137a87","Type":"ContainerStarted","Data":"788e426a18c3f3cdb354259a22ff46a14146e6b437e691bd680554a85895f7ca"}
Feb 27 17:17:27 crc kubenswrapper[4708]: I0227 17:17:27.214452 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc027c08-56ee-4816-b983-daa9250ba660","Type":"ContainerStarted","Data":"6cb680491f190ad1dacd755ef7d397278a9621f1430b9537937fe38ad63885d7"}
Feb 27 17:17:27 crc kubenswrapper[4708]: I0227 17:17:27.216446 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"5eb86d99-f452-4dcb-87a2-2402ded393d4","Type":"ContainerStarted","Data":"17c7cafe15b056cfb33e00bdc3376f8688c6dab72f7447a26d9d78b8a79cb21d"}
Feb 27 17:17:27 crc kubenswrapper[4708]: I0227 17:17:27.217779 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl" event={"ID":"e1dda127-a3f5-474b-b992-a26590e8507b","Type":"ContainerDied","Data":"a04eba9c32c9f9f1e5b06e509a986bbdccd39b7b678b1e67a2d1509f03ceff30"}
Feb 27 17:17:27 crc kubenswrapper[4708]: I0227 17:17:27.217805 4708 scope.go:117] "RemoveContainer" containerID="0249e25a23b07a0e0dce118071e141133b15967274c28d05058d3823a52ad65a"
Feb 27 17:17:27 crc kubenswrapper[4708]: I0227 17:17:27.217917 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c9776ccc5-jdbxl"
Feb 27 17:17:27 crc kubenswrapper[4708]: I0227 17:17:27.234616 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:27 crc kubenswrapper[4708]: I0227 17:17:27.314135 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-jdbxl"]
Feb 27 17:17:27 crc kubenswrapper[4708]: I0227 17:17:27.359012 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c9776ccc5-jdbxl"]
Feb 27 17:17:27 crc kubenswrapper[4708]: I0227 17:17:27.384670 4708 scope.go:117] "RemoveContainer" containerID="fe1a156f7e537f23b2251dfe5e120498e46bab142b6130979acef0599c810c31"
Feb 27 17:17:27 crc kubenswrapper[4708]: I0227 17:17:27.475045 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"]
Feb 27 17:17:28 crc kubenswrapper[4708]: I0227 17:17:28.245573 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fe2a72f-1849-4e9a-a275-8d92879371e8" path="/var/lib/kubelet/pods/3fe2a72f-1849-4e9a-a275-8d92879371e8/volumes"
Feb 27 17:17:28 crc kubenswrapper[4708]: I0227 17:17:28.246689 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1dda127-a3f5-474b-b992-a26590e8507b" path="/var/lib/kubelet/pods/e1dda127-a3f5-474b-b992-a26590e8507b/volumes"
Feb 27 17:17:28 crc kubenswrapper[4708]: I0227 17:17:28.247301 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc027c08-56ee-4816-b983-daa9250ba660","Type":"ContainerStarted","Data":"da0cb185dbec473d3c1dbbd8ed39d08660d3b4e96f193b817bbfd350a49572a9"}
Feb 27 17:17:28 crc kubenswrapper[4708]: I0227 17:17:28.261556 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-594bc68494-cmml7"
Feb 27 17:17:28 crc kubenswrapper[4708]: I0227 17:17:28.262000 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"5eb86d99-f452-4dcb-87a2-2402ded393d4","Type":"ContainerStarted","Data":"b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c"}
Feb 27 17:17:28 crc kubenswrapper[4708]: I0227 17:17:28.262031 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"5eb86d99-f452-4dcb-87a2-2402ded393d4","Type":"ContainerStarted","Data":"177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598"}
Feb 27 17:17:28 crc kubenswrapper[4708]: I0227 17:17:28.262249 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0"
Feb 27 17:17:28 crc kubenswrapper[4708]: I0227 17:17:28.281217 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0"
Feb 27 17:17:28 crc kubenswrapper[4708]: I0227 17:17:28.287922 4708 generic.go:334] "Generic (PLEG): container finished" podID="562eb33e-5d66-4492-ba9b-dda2b6666471" containerID="1bd87f282992d16b9e22ffb4e7bc9789b5a23a1e8ab43aa0ed8d87fbf488b390" exitCode=0
Feb 27 17:17:28 crc kubenswrapper[4708]: I0227 17:17:28.288984 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" event={"ID":"562eb33e-5d66-4492-ba9b-dda2b6666471","Type":"ContainerDied","Data":"1bd87f282992d16b9e22ffb4e7bc9789b5a23a1e8ab43aa0ed8d87fbf488b390"}
Feb 27 17:17:28 crc kubenswrapper[4708]: I0227 17:17:28.323338 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"395e91ca-8629-4557-bcb7-f84d7f61b61d","Type":"ContainerStarted","Data":"850dc5d5c9b6cef681f7bca5e829427abbcc0ee880f48eb52a02f3f0896c7b79"}
Feb 27 17:17:28 crc kubenswrapper[4708]: I0227 17:17:28.362423 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=3.362405505 podStartE2EDuration="3.362405505s" podCreationTimestamp="2026-02-27 17:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:28.322517565 +0000 UTC m=+1446.838315142" watchObservedRunningTime="2026-02-27 17:17:28.362405505 +0000 UTC m=+1446.878203092"
Feb 27 17:17:28 crc kubenswrapper[4708]: I0227 17:17:28.430444 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 27 17:17:29 crc kubenswrapper[4708]: I0227 17:17:29.203199 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Feb 27 17:17:29 crc kubenswrapper[4708]: I0227 17:17:29.332358 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" event={"ID":"562eb33e-5d66-4492-ba9b-dda2b6666471","Type":"ContainerStarted","Data":"e768f6c202a42cf223a6c5ebae7a5124171aa2f15f8fc231fc07edab9677ad47"}
Feb 27 17:17:29 crc kubenswrapper[4708]: I0227 17:17:29.333665 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-67bdc55879-mqlv7"
Feb 27 17:17:29 crc kubenswrapper[4708]: I0227 17:17:29.337164 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"395e91ca-8629-4557-bcb7-f84d7f61b61d","Type":"ContainerStarted","Data":"be2514805307e28e83d66e30cecb5c66ca93c817fc713d5c99955d9cf2abd1d5"}
Feb 27 17:17:29 crc kubenswrapper[4708]: I0227 17:17:29.358874 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" podStartSLOduration=4.35883841 podStartE2EDuration="4.35883841s" podCreationTimestamp="2026-02-27 17:17:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:29.354955171 +0000 UTC m=+1447.870752758" watchObservedRunningTime="2026-02-27 17:17:29.35883841 +0000 UTC m=+1447.874635997"
Feb 27 17:17:29 crc kubenswrapper[4708]: I0227 17:17:29.420534 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 27 17:17:29 crc kubenswrapper[4708]: I0227 17:17:29.935743 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.347616 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"395e91ca-8629-4557-bcb7-f84d7f61b61d","Type":"ContainerStarted","Data":"fbe2778af80349984da0fccc11a43428a16545c74ff20d10608cab38612ac56e"}
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.362514 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"bdff2588-dcf5-43de-9d14-44da1a137a87","Type":"ContainerStarted","Data":"3f21dd641d04903eba524a3fb631253d8edb9da59ae5a4ca2504eac9384f1a0f"}
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.372634 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="5eb86d99-f452-4dcb-87a2-2402ded393d4" containerName="cloudkitty-api-log" containerID="cri-o://177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598" gracePeriod=30
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.373233 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc027c08-56ee-4816-b983-daa9250ba660","Type":"ContainerStarted","Data":"f249410a76c58a4b9bf9f3bfb31ff2585e60570ef35facfb9850e03410e4f7c9"}
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.373551 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="102af832-14be-4626-9549-7e6fdd8abe4f" containerName="cinder-scheduler" containerID="cri-o://8d49d39b1fd44df00882341b59bd7d3621df88ec06e1b8e2086e41bd6b5ba118" gracePeriod=30
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.374150 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="102af832-14be-4626-9549-7e6fdd8abe4f" containerName="probe" containerID="cri-o://59289c75815dbc7e3f32916b080fd3d52435ae6ee823668d89691240c4624a4a" gracePeriod=30
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.374254 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="5eb86d99-f452-4dcb-87a2-2402ded393d4" containerName="cloudkitty-api" containerID="cri-o://b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c" gracePeriod=30
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.374567 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.412487 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.412462233 podStartE2EDuration="4.412462233s" podCreationTimestamp="2026-02-27 17:17:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:30.380689021 +0000 UTC m=+1448.896486608" watchObservedRunningTime="2026-02-27 17:17:30.412462233 +0000 UTC m=+1448.928259820"
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.419499 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.5684755900000003 podStartE2EDuration="5.419491711s" podCreationTimestamp="2026-02-27 17:17:25 +0000 UTC" firstStartedPulling="2026-02-27 17:17:26.613120548 +0000 UTC m=+1445.128918135" lastFinishedPulling="2026-02-27 17:17:29.464136669 +0000 UTC m=+1447.979934256" observedRunningTime="2026-02-27 17:17:30.411945969 +0000 UTC m=+1448.927743556" watchObservedRunningTime="2026-02-27 17:17:30.419491711 +0000 UTC m=+1448.935289298"
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.446770 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.471251 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.913802996 podStartE2EDuration="7.471233545s" podCreationTimestamp="2026-02-27 17:17:23 +0000 UTC" firstStartedPulling="2026-02-27 17:17:24.181764248 +0000 UTC m=+1442.697561835" lastFinishedPulling="2026-02-27 17:17:29.739194797 +0000 UTC m=+1448.254992384" observedRunningTime="2026-02-27 17:17:30.459220557 +0000 UTC m=+1448.975018144" watchObservedRunningTime="2026-02-27 17:17:30.471233545 +0000 UTC m=+1448.987031132"
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.597179 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7c6cc57cfd-rj6nd"
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.664573 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-594bc68494-cmml7"]
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.664798 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-594bc68494-cmml7" podUID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerName="barbican-api-log" containerID="cri-o://46fc9d23eae1c0a82d436083bb5fdbed5d47c370b34f2bc54b95098ee4666e0e" gracePeriod=30
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.665179 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-594bc68494-cmml7" podUID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerName="barbican-api" containerID="cri-o://c8eb35e2b9a4b1db1b402a991bbdf2cdfe102f9f6a5e195b8a17f8047fa73f76" gracePeriod=30
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.672682 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-594bc68494-cmml7" podUID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.186:9311/healthcheck\": EOF"
Feb 27 17:17:30 crc kubenswrapper[4708]: I0227 17:17:30.735634 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-594bc68494-cmml7" podUID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.186:9311/healthcheck\": EOF"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.325790 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.439072 4708 generic.go:334] "Generic (PLEG): container finished" podID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerID="46fc9d23eae1c0a82d436083bb5fdbed5d47c370b34f2bc54b95098ee4666e0e" exitCode=143
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.439161 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-594bc68494-cmml7" event={"ID":"1d8544df-a61e-464b-bc9e-9a68908322c8","Type":"ContainerDied","Data":"46fc9d23eae1c0a82d436083bb5fdbed5d47c370b34f2bc54b95098ee4666e0e"}
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.457495 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eb86d99-f452-4dcb-87a2-2402ded393d4-logs\") pod \"5eb86d99-f452-4dcb-87a2-2402ded393d4\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") "
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.457563 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f7mr\" (UniqueName: \"kubernetes.io/projected/5eb86d99-f452-4dcb-87a2-2402ded393d4-kube-api-access-4f7mr\") pod \"5eb86d99-f452-4dcb-87a2-2402ded393d4\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") "
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.457619 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-config-data-custom\") pod \"5eb86d99-f452-4dcb-87a2-2402ded393d4\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") "
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.457637 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-config-data\") pod \"5eb86d99-f452-4dcb-87a2-2402ded393d4\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") "
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.457663 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5eb86d99-f452-4dcb-87a2-2402ded393d4-certs\") pod \"5eb86d99-f452-4dcb-87a2-2402ded393d4\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") "
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.457709 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-scripts\") pod \"5eb86d99-f452-4dcb-87a2-2402ded393d4\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") "
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.457744 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-combined-ca-bundle\") pod \"5eb86d99-f452-4dcb-87a2-2402ded393d4\" (UID: \"5eb86d99-f452-4dcb-87a2-2402ded393d4\") "
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.469421 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5eb86d99-f452-4dcb-87a2-2402ded393d4-logs" (OuterVolumeSpecName: "logs") pod "5eb86d99-f452-4dcb-87a2-2402ded393d4" (UID: "5eb86d99-f452-4dcb-87a2-2402ded393d4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.474386 4708 generic.go:334] "Generic (PLEG): container finished" podID="5eb86d99-f452-4dcb-87a2-2402ded393d4" containerID="b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c" exitCode=0
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.474417 4708 generic.go:334] "Generic (PLEG): container finished" podID="5eb86d99-f452-4dcb-87a2-2402ded393d4" containerID="177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598" exitCode=143
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.475435 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.475928 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"5eb86d99-f452-4dcb-87a2-2402ded393d4","Type":"ContainerDied","Data":"b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c"}
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.475953 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"5eb86d99-f452-4dcb-87a2-2402ded393d4","Type":"ContainerDied","Data":"177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598"}
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.475966 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"5eb86d99-f452-4dcb-87a2-2402ded393d4","Type":"ContainerDied","Data":"17c7cafe15b056cfb33e00bdc3376f8688c6dab72f7447a26d9d78b8a79cb21d"}
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.475985 4708 scope.go:117] "RemoveContainer" containerID="b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.476817 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.482782 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eb86d99-f452-4dcb-87a2-2402ded393d4-certs" (OuterVolumeSpecName: "certs") pod "5eb86d99-f452-4dcb-87a2-2402ded393d4" (UID: "5eb86d99-f452-4dcb-87a2-2402ded393d4"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.483490 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-scripts" (OuterVolumeSpecName: "scripts") pod "5eb86d99-f452-4dcb-87a2-2402ded393d4" (UID: "5eb86d99-f452-4dcb-87a2-2402ded393d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.484330 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eb86d99-f452-4dcb-87a2-2402ded393d4-kube-api-access-4f7mr" (OuterVolumeSpecName: "kube-api-access-4f7mr") pod "5eb86d99-f452-4dcb-87a2-2402ded393d4" (UID: "5eb86d99-f452-4dcb-87a2-2402ded393d4"). InnerVolumeSpecName "kube-api-access-4f7mr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.489994 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5eb86d99-f452-4dcb-87a2-2402ded393d4" (UID: "5eb86d99-f452-4dcb-87a2-2402ded393d4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.534281 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-config-data" (OuterVolumeSpecName: "config-data") pod "5eb86d99-f452-4dcb-87a2-2402ded393d4" (UID: "5eb86d99-f452-4dcb-87a2-2402ded393d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.560266 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5eb86d99-f452-4dcb-87a2-2402ded393d4-logs\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.560304 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f7mr\" (UniqueName: \"kubernetes.io/projected/5eb86d99-f452-4dcb-87a2-2402ded393d4-kube-api-access-4f7mr\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.560315 4708 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.560325 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.560333 4708 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/5eb86d99-f452-4dcb-87a2-2402ded393d4-certs\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.560341 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.574995 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5eb86d99-f452-4dcb-87a2-2402ded393d4" (UID: "5eb86d99-f452-4dcb-87a2-2402ded393d4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.646302 4708 scope.go:117] "RemoveContainer" containerID="177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.663090 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5eb86d99-f452-4dcb-87a2-2402ded393d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.671922 4708 scope.go:117] "RemoveContainer" containerID="b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c"
Feb 27 17:17:31 crc kubenswrapper[4708]: E0227 17:17:31.672306 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c\": container with ID starting with b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c not found: ID does not exist" containerID="b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.672332 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c"} err="failed to get container status \"b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c\": rpc error: code = NotFound desc = could not find container \"b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c\": container with ID starting with b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c not found: ID does not exist"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.672352 4708 scope.go:117] "RemoveContainer" containerID="177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598"
Feb 27 17:17:31 crc kubenswrapper[4708]: E0227 17:17:31.672514 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598\": container with ID starting with 177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598 not found: ID does not exist" containerID="177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.672533 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598"} err="failed to get container status \"177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598\": rpc error: code = NotFound desc = could not find container \"177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598\": container with ID starting with 177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598 not found: ID does not exist"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.672547 4708 scope.go:117] "RemoveContainer" containerID="b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.672707 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c"} err="failed to get container status \"b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c\": rpc error: code = NotFound desc = could not find container \"b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c\": container with ID starting with b43602631e604a218d0c195f500fe5d27059ffabae362620ee692c94454db42c not found: ID does not exist"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.672727 4708 scope.go:117] "RemoveContainer" containerID="177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.672906 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598"} err="failed to get container status \"177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598\": rpc error: code = NotFound desc = could not find container \"177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598\": container with ID starting with 177b82cf81f61f39b419d1e2f65a16ba5338094240f0dfd4cfe1ac6b194ae598 not found: ID does not exist"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.832044 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.846449 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.884888 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 27 17:17:31 crc kubenswrapper[4708]: E0227 17:17:31.885276 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eb86d99-f452-4dcb-87a2-2402ded393d4" containerName="cloudkitty-api"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.885292 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eb86d99-f452-4dcb-87a2-2402ded393d4" containerName="cloudkitty-api"
Feb 27 17:17:31 crc kubenswrapper[4708]: E0227 17:17:31.885328 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eb86d99-f452-4dcb-87a2-2402ded393d4" containerName="cloudkitty-api-log"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.885335 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eb86d99-f452-4dcb-87a2-2402ded393d4" containerName="cloudkitty-api-log"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.885501 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eb86d99-f452-4dcb-87a2-2402ded393d4" containerName="cloudkitty-api-log"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.885522 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eb86d99-f452-4dcb-87a2-2402ded393d4" containerName="cloudkitty-api"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.886551 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.889208 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.889345 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.889472 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.898218 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.968439 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pf5h\" (UniqueName: \"kubernetes.io/projected/0d761a40-6a7c-4691-a079-919d74122b18-kube-api-access-7pf5h\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.968481 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0d761a40-6a7c-4691-a079-919d74122b18-certs\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.968508 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.968541 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-config-data\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.968569 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-scripts\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.968610 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.968623 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d761a40-6a7c-4691-a079-919d74122b18-logs\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.968645 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:31 crc kubenswrapper[4708]: I0227 17:17:31.968700 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.070654 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.070730 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pf5h\" (UniqueName: \"kubernetes.io/projected/0d761a40-6a7c-4691-a079-919d74122b18-kube-api-access-7pf5h\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.070757 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0d761a40-6a7c-4691-a079-919d74122b18-certs\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.070776 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.070808 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-config-data\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.070836 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-scripts\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.070888 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.070901 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d761a40-6a7c-4691-a079-919d74122b18-logs\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.070928 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.082981 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d761a40-6a7c-4691-a079-919d74122b18-logs\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.083410 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.086663 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-config-data\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.087092 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.088096 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.090511 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0d761a40-6a7c-4691-a079-919d74122b18-certs\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.092293 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.094345 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-scripts\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.094763 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pf5h\" (UniqueName: \"kubernetes.io/projected/0d761a40-6a7c-4691-a079-919d74122b18-kube-api-access-7pf5h\") pod \"cloudkitty-api-0\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") " pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.214690 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.244519 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eb86d99-f452-4dcb-87a2-2402ded393d4" path="/var/lib/kubelet/pods/5eb86d99-f452-4dcb-87a2-2402ded393d4/volumes"
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.529012 4708 generic.go:334] "Generic (PLEG): container finished" podID="102af832-14be-4626-9549-7e6fdd8abe4f" containerID="59289c75815dbc7e3f32916b080fd3d52435ae6ee823668d89691240c4624a4a" exitCode=0
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.529303 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"102af832-14be-4626-9549-7e6fdd8abe4f","Type":"ContainerDied","Data":"59289c75815dbc7e3f32916b080fd3d52435ae6ee823668d89691240c4624a4a"}
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.529558 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="bdff2588-dcf5-43de-9d14-44da1a137a87" containerName="cloudkitty-proc" containerID="cri-o://3f21dd641d04903eba524a3fb631253d8edb9da59ae5a4ca2504eac9384f1a0f" gracePeriod=30
Feb 27 17:17:32 crc kubenswrapper[4708]: I0227 17:17:32.692557 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 27 17:17:33 crc kubenswrapper[4708]: I0227 17:17:33.550733 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0d761a40-6a7c-4691-a079-919d74122b18","Type":"ContainerStarted","Data":"33b8361e5bf13f90d6c7d3ca6ff9b4201c091d33659bca65cb35685b6ebfa0d6"}
Feb 27 17:17:33 crc kubenswrapper[4708]: I0227 17:17:33.551399 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0d761a40-6a7c-4691-a079-919d74122b18","Type":"ContainerStarted","Data":"88b5fc6bee7f8bed88317544d57f702857c891028d6f5980d0c0cb416af5e36d"}
Feb 27 17:17:33 crc kubenswrapper[4708]: I0227 17:17:33.551417 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0d761a40-6a7c-4691-a079-919d74122b18","Type":"ContainerStarted","Data":"9ece77b3e595b8d0aafade86341b2bedb77c8fe62631fa4102e79cf4a7c39b8d"}
Feb 27 17:17:33 crc kubenswrapper[4708]: I0227 17:17:33.551476 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0"
Feb 27 17:17:33 crc kubenswrapper[4708]: I0227 17:17:33.554347 4708 generic.go:334] "Generic (PLEG): container finished" podID="102af832-14be-4626-9549-7e6fdd8abe4f" containerID="8d49d39b1fd44df00882341b59bd7d3621df88ec06e1b8e2086e41bd6b5ba118" exitCode=0
Feb 27 17:17:33 crc kubenswrapper[4708]: I0227 17:17:33.554377 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"102af832-14be-4626-9549-7e6fdd8abe4f","Type":"ContainerDied","Data":"8d49d39b1fd44df00882341b59bd7d3621df88ec06e1b8e2086e41bd6b5ba118"}
Feb 27 17:17:33 crc kubenswrapper[4708]: I0227 17:17:33.579004 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.578967129 podStartE2EDuration="2.578967129s" podCreationTimestamp="2026-02-27 17:17:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:33.577633451 +0000 UTC m=+1452.093431058" watchObservedRunningTime="2026-02-27 17:17:33.578967129 +0000 UTC m=+1452.094764736"
Feb 27 17:17:33 crc kubenswrapper[4708]: I0227 17:17:33.963657 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.126126 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/102af832-14be-4626-9549-7e6fdd8abe4f-etc-machine-id\") pod \"102af832-14be-4626-9549-7e6fdd8abe4f\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") "
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.126182 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-scripts\") pod \"102af832-14be-4626-9549-7e6fdd8abe4f\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") "
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.126307 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/102af832-14be-4626-9549-7e6fdd8abe4f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "102af832-14be-4626-9549-7e6fdd8abe4f" (UID: "102af832-14be-4626-9549-7e6fdd8abe4f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.126348 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-config-data\") pod \"102af832-14be-4626-9549-7e6fdd8abe4f\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") "
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.126398 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-config-data-custom\") pod \"102af832-14be-4626-9549-7e6fdd8abe4f\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") "
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.126595 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5dw7\" (UniqueName: \"kubernetes.io/projected/102af832-14be-4626-9549-7e6fdd8abe4f-kube-api-access-c5dw7\") pod \"102af832-14be-4626-9549-7e6fdd8abe4f\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") "
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.126663 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-combined-ca-bundle\") pod \"102af832-14be-4626-9549-7e6fdd8abe4f\" (UID: \"102af832-14be-4626-9549-7e6fdd8abe4f\") "
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.127705 4708 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/102af832-14be-4626-9549-7e6fdd8abe4f-etc-machine-id\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.133630 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "102af832-14be-4626-9549-7e6fdd8abe4f" (UID: "102af832-14be-4626-9549-7e6fdd8abe4f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.133746 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/102af832-14be-4626-9549-7e6fdd8abe4f-kube-api-access-c5dw7" (OuterVolumeSpecName: "kube-api-access-c5dw7") pod "102af832-14be-4626-9549-7e6fdd8abe4f" (UID: "102af832-14be-4626-9549-7e6fdd8abe4f"). InnerVolumeSpecName "kube-api-access-c5dw7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.140939 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-scripts" (OuterVolumeSpecName: "scripts") pod "102af832-14be-4626-9549-7e6fdd8abe4f" (UID: "102af832-14be-4626-9549-7e6fdd8abe4f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.201646 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "102af832-14be-4626-9549-7e6fdd8abe4f" (UID: "102af832-14be-4626-9549-7e6fdd8abe4f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.231163 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.231395 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.231406 4708 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.231417 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5dw7\" (UniqueName: \"kubernetes.io/projected/102af832-14be-4626-9549-7e6fdd8abe4f-kube-api-access-c5dw7\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.283933 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-config-data" (OuterVolumeSpecName: "config-data") pod "102af832-14be-4626-9549-7e6fdd8abe4f" (UID: "102af832-14be-4626-9549-7e6fdd8abe4f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.333307 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/102af832-14be-4626-9549-7e6fdd8abe4f-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.567397 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.568810 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"102af832-14be-4626-9549-7e6fdd8abe4f","Type":"ContainerDied","Data":"324b33b6905c6f30863fa933bff76cbee73b4c4203bc2796c83e01660178109d"}
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.568902 4708 scope.go:117] "RemoveContainer" containerID="59289c75815dbc7e3f32916b080fd3d52435ae6ee823668d89691240c4624a4a"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.621046 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.628483 4708 scope.go:117] "RemoveContainer" containerID="8d49d39b1fd44df00882341b59bd7d3621df88ec06e1b8e2086e41bd6b5ba118"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.637624 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.658618 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 27 17:17:34 crc kubenswrapper[4708]: E0227 17:17:34.659095 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102af832-14be-4626-9549-7e6fdd8abe4f" containerName="probe"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.659114 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="102af832-14be-4626-9549-7e6fdd8abe4f" containerName="probe"
Feb 27 17:17:34 crc kubenswrapper[4708]: E0227 17:17:34.659147 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="102af832-14be-4626-9549-7e6fdd8abe4f" containerName="cinder-scheduler"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.659155 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="102af832-14be-4626-9549-7e6fdd8abe4f" containerName="cinder-scheduler"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.659492 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="102af832-14be-4626-9549-7e6fdd8abe4f" containerName="probe"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.659507 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="102af832-14be-4626-9549-7e6fdd8abe4f" containerName="cinder-scheduler"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.660725 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.665607 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.678649 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"]
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.743771 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a8a6d87-beea-472a-a795-a8fc5daf0bde-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.744343 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a8a6d87-beea-472a-a795-a8fc5daf0bde-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.744496 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a8a6d87-beea-472a-a795-a8fc5daf0bde-scripts\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.744605 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a8a6d87-beea-472a-a795-a8fc5daf0bde-config-data\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.744714 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a8a6d87-beea-472a-a795-a8fc5daf0bde-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.744826 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5l6w\" (UniqueName: \"kubernetes.io/projected/8a8a6d87-beea-472a-a795-a8fc5daf0bde-kube-api-access-c5l6w\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.846764 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a8a6d87-beea-472a-a795-a8fc5daf0bde-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.846888 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a8a6d87-beea-472a-a795-a8fc5daf0bde-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.846945 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a8a6d87-beea-472a-a795-a8fc5daf0bde-scripts\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.846976 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a8a6d87-beea-472a-a795-a8fc5daf0bde-config-data\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.847007 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a8a6d87-beea-472a-a795-a8fc5daf0bde-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.847037 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5l6w\" (UniqueName: \"kubernetes.io/projected/8a8a6d87-beea-472a-a795-a8fc5daf0bde-kube-api-access-c5l6w\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.847698 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8a8a6d87-beea-472a-a795-a8fc5daf0bde-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.853236 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8a8a6d87-beea-472a-a795-a8fc5daf0bde-config-data\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.854686 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8a8a6d87-beea-472a-a795-a8fc5daf0bde-scripts\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.859599 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8a8a6d87-beea-472a-a795-a8fc5daf0bde-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.866613 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8a8a6d87-beea-472a-a795-a8fc5daf0bde-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34 crc kubenswrapper[4708]: I0227 17:17:34.879384 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5l6w\" (UniqueName: \"kubernetes.io/projected/8a8a6d87-beea-472a-a795-a8fc5daf0bde-kube-api-access-c5l6w\") pod \"cinder-scheduler-0\" (UID: \"8a8a6d87-beea-472a-a795-a8fc5daf0bde\") " pod="openstack/cinder-scheduler-0"
Feb 27 17:17:34
crc kubenswrapper[4708]: I0227 17:17:34.993948 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Feb 27 17:17:35 crc kubenswrapper[4708]: I0227 17:17:35.164633 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-594bc68494-cmml7" podUID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.186:9311/healthcheck\": read tcp 10.217.0.2:45114->10.217.0.186:9311: read: connection reset by peer" Feb 27 17:17:35 crc kubenswrapper[4708]: I0227 17:17:35.165137 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-594bc68494-cmml7" podUID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.186:9311/healthcheck\": read tcp 10.217.0.2:45104->10.217.0.186:9311: read: connection reset by peer" Feb 27 17:17:35 crc kubenswrapper[4708]: I0227 17:17:35.165482 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-594bc68494-cmml7" podUID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.186:9311/healthcheck\": dial tcp 10.217.0.186:9311: connect: connection refused" Feb 27 17:17:35 crc kubenswrapper[4708]: I0227 17:17:35.165559 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-594bc68494-cmml7" Feb 27 17:17:35 crc kubenswrapper[4708]: I0227 17:17:35.668619 4708 generic.go:334] "Generic (PLEG): container finished" podID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerID="c8eb35e2b9a4b1db1b402a991bbdf2cdfe102f9f6a5e195b8a17f8047fa73f76" exitCode=0 Feb 27 17:17:35 crc kubenswrapper[4708]: I0227 17:17:35.669481 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-594bc68494-cmml7" event={"ID":"1d8544df-a61e-464b-bc9e-9a68908322c8","Type":"ContainerDied","Data":"c8eb35e2b9a4b1db1b402a991bbdf2cdfe102f9f6a5e195b8a17f8047fa73f76"} Feb 27 17:17:35 crc kubenswrapper[4708]: I0227 17:17:35.685273 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Feb 27 17:17:35 crc kubenswrapper[4708]: I0227 17:17:35.950966 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-594bc68494-cmml7" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.017153 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.085859 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-combined-ca-bundle\") pod \"1d8544df-a61e-464b-bc9e-9a68908322c8\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.085930 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-config-data-custom\") pod \"1d8544df-a61e-464b-bc9e-9a68908322c8\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.085984 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-config-data\") pod \"1d8544df-a61e-464b-bc9e-9a68908322c8\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.086065 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tbrhw\" (UniqueName: \"kubernetes.io/projected/1d8544df-a61e-464b-bc9e-9a68908322c8-kube-api-access-tbrhw\") pod \"1d8544df-a61e-464b-bc9e-9a68908322c8\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.086089 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d8544df-a61e-464b-bc9e-9a68908322c8-logs\") pod \"1d8544df-a61e-464b-bc9e-9a68908322c8\" (UID: \"1d8544df-a61e-464b-bc9e-9a68908322c8\") " Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.098906 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-c24g9"] Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.099242 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-fb745b69-c24g9" podUID="5e596e79-d862-49bc-b016-afaaab6828f8" containerName="dnsmasq-dns" containerID="cri-o://0f28d3f200e4a1c615d8818382e481e4969daacbb688eafb8bb06c1d1bd0cfae" gracePeriod=10 Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.101385 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d8544df-a61e-464b-bc9e-9a68908322c8-logs" (OuterVolumeSpecName: "logs") pod "1d8544df-a61e-464b-bc9e-9a68908322c8" (UID: "1d8544df-a61e-464b-bc9e-9a68908322c8"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.104070 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d8544df-a61e-464b-bc9e-9a68908322c8-kube-api-access-tbrhw" (OuterVolumeSpecName: "kube-api-access-tbrhw") pod "1d8544df-a61e-464b-bc9e-9a68908322c8" (UID: "1d8544df-a61e-464b-bc9e-9a68908322c8"). InnerVolumeSpecName "kube-api-access-tbrhw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.117134 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "1d8544df-a61e-464b-bc9e-9a68908322c8" (UID: "1d8544df-a61e-464b-bc9e-9a68908322c8"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.148343 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1d8544df-a61e-464b-bc9e-9a68908322c8" (UID: "1d8544df-a61e-464b-bc9e-9a68908322c8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.181984 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-config-data" (OuterVolumeSpecName: "config-data") pod "1d8544df-a61e-464b-bc9e-9a68908322c8" (UID: "1d8544df-a61e-464b-bc9e-9a68908322c8"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.189153 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.189205 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tbrhw\" (UniqueName: \"kubernetes.io/projected/1d8544df-a61e-464b-bc9e-9a68908322c8-kube-api-access-tbrhw\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.189221 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d8544df-a61e-464b-bc9e-9a68908322c8-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.189229 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.189238 4708 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/1d8544df-a61e-464b-bc9e-9a68908322c8-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.278048 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="102af832-14be-4626-9549-7e6fdd8abe4f" path="/var/lib/kubelet/pods/102af832-14be-4626-9549-7e6fdd8abe4f/volumes" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.715782 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-594bc68494-cmml7" event={"ID":"1d8544df-a61e-464b-bc9e-9a68908322c8","Type":"ContainerDied","Data":"a9903b0b99bcbaa67241af05bc3c9dcb57a88bd93ecead44eb23e2b3fab5d1b6"} Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.715833 4708 scope.go:117] "RemoveContainer" containerID="c8eb35e2b9a4b1db1b402a991bbdf2cdfe102f9f6a5e195b8a17f8047fa73f76" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.715974 4708 util.go:48] "No ready sandbox 
for pod can be found. Need to start a new one" pod="openstack/barbican-api-594bc68494-cmml7" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.755415 4708 generic.go:334] "Generic (PLEG): container finished" podID="5e596e79-d862-49bc-b016-afaaab6828f8" containerID="0f28d3f200e4a1c615d8818382e481e4969daacbb688eafb8bb06c1d1bd0cfae" exitCode=0 Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.755480 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-c24g9" event={"ID":"5e596e79-d862-49bc-b016-afaaab6828f8","Type":"ContainerDied","Data":"0f28d3f200e4a1c615d8818382e481e4969daacbb688eafb8bb06c1d1bd0cfae"} Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.787899 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-594bc68494-cmml7"] Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.788228 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8a8a6d87-beea-472a-a795-a8fc5daf0bde","Type":"ContainerStarted","Data":"fb9a3971af4b44cba88db4314410f050f13b4026af5424ee7f87dc7fb744d334"} Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.788278 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8a8a6d87-beea-472a-a795-a8fc5daf0bde","Type":"ContainerStarted","Data":"be6219f966d80aa9fed21a49290c929db847c32e38632bd56ce3e417a73e46e3"} Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.801486 4708 scope.go:117] "RemoveContainer" containerID="46fc9d23eae1c0a82d436083bb5fdbed5d47c370b34f2bc54b95098ee4666e0e" Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.804691 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-594bc68494-cmml7"] Feb 27 17:17:36 crc kubenswrapper[4708]: I0227 17:17:36.891001 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.009599 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-config\") pod \"5e596e79-d862-49bc-b016-afaaab6828f8\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.009751 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-dns-svc\") pod \"5e596e79-d862-49bc-b016-afaaab6828f8\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.009948 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-ovsdbserver-sb\") pod \"5e596e79-d862-49bc-b016-afaaab6828f8\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.010108 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-ovsdbserver-nb\") pod \"5e596e79-d862-49bc-b016-afaaab6828f8\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.010131 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d44s7\" (UniqueName: \"kubernetes.io/projected/5e596e79-d862-49bc-b016-afaaab6828f8-kube-api-access-d44s7\") pod \"5e596e79-d862-49bc-b016-afaaab6828f8\" (UID: \"5e596e79-d862-49bc-b016-afaaab6828f8\") " Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.045058 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e596e79-d862-49bc-b016-afaaab6828f8-kube-api-access-d44s7" (OuterVolumeSpecName: "kube-api-access-d44s7") pod "5e596e79-d862-49bc-b016-afaaab6828f8" (UID: "5e596e79-d862-49bc-b016-afaaab6828f8"). InnerVolumeSpecName "kube-api-access-d44s7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.074625 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "5e596e79-d862-49bc-b016-afaaab6828f8" (UID: "5e596e79-d862-49bc-b016-afaaab6828f8"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.088082 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "5e596e79-d862-49bc-b016-afaaab6828f8" (UID: "5e596e79-d862-49bc-b016-afaaab6828f8"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.103509 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "5e596e79-d862-49bc-b016-afaaab6828f8" (UID: "5e596e79-d862-49bc-b016-afaaab6828f8"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.111970 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.111999 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.112030 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d44s7\" (UniqueName: \"kubernetes.io/projected/5e596e79-d862-49bc-b016-afaaab6828f8-kube-api-access-d44s7\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.112040 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.145217 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-config" (OuterVolumeSpecName: "config") pod "5e596e79-d862-49bc-b016-afaaab6828f8" (UID: "5e596e79-d862-49bc-b016-afaaab6828f8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.214033 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5e596e79-d862-49bc-b016-afaaab6828f8-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.491537 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.492108 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.593259 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-54c5f87dbb-t77v4" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.611805 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.677691 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6d5895b968-p7cts"] Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.804736 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8a8a6d87-beea-472a-a795-a8fc5daf0bde","Type":"ContainerStarted","Data":"e16665c60e370dcf4e7c7cdf31f3f28edc250714ae7b438eb57c8a17b8d1b07b"} Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.825262 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fb745b69-c24g9" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.825723 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fb745b69-c24g9" event={"ID":"5e596e79-d862-49bc-b016-afaaab6828f8","Type":"ContainerDied","Data":"ece491d5a7461d06b4c08ea6aa9a369e229cd6f07e5d2fa6a09b72e56fda59bf"} Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.825761 4708 scope.go:117] "RemoveContainer" containerID="0f28d3f200e4a1c615d8818382e481e4969daacbb688eafb8bb06c1d1bd0cfae" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.833342 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.8333203190000003 podStartE2EDuration="3.833320319s" podCreationTimestamp="2026-02-27 17:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:37.829920183 +0000 UTC m=+1456.345717770" watchObservedRunningTime="2026-02-27 17:17:37.833320319 +0000 UTC m=+1456.349117906" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.838663 4708 generic.go:334] "Generic (PLEG): container finished" podID="bdff2588-dcf5-43de-9d14-44da1a137a87" containerID="3f21dd641d04903eba524a3fb631253d8edb9da59ae5a4ca2504eac9384f1a0f" exitCode=0 Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.840591 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"bdff2588-dcf5-43de-9d14-44da1a137a87","Type":"ContainerDied","Data":"3f21dd641d04903eba524a3fb631253d8edb9da59ae5a4ca2504eac9384f1a0f"} Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.858018 4708 scope.go:117] "RemoveContainer" containerID="1bc4eec3450571587e9296a9da6256270cec6c873fe2591f4e8e85a8da8e8bed" Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.940863 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-c24g9"] Feb 27 17:17:37 crc kubenswrapper[4708]: I0227 17:17:37.953658 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fb745b69-c24g9"] Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.178558 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.257077 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d8544df-a61e-464b-bc9e-9a68908322c8" path="/var/lib/kubelet/pods/1d8544df-a61e-464b-bc9e-9a68908322c8/volumes" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.257810 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e596e79-d862-49bc-b016-afaaab6828f8" path="/var/lib/kubelet/pods/5e596e79-d862-49bc-b016-afaaab6828f8/volumes" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.258649 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlq55\" (UniqueName: \"kubernetes.io/projected/bdff2588-dcf5-43de-9d14-44da1a137a87-kube-api-access-wlq55\") pod \"bdff2588-dcf5-43de-9d14-44da1a137a87\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.258698 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-config-data\") pod \"bdff2588-dcf5-43de-9d14-44da1a137a87\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.258769 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-scripts\") pod \"bdff2588-dcf5-43de-9d14-44da1a137a87\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.258811 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bdff2588-dcf5-43de-9d14-44da1a137a87-certs\") pod \"bdff2588-dcf5-43de-9d14-44da1a137a87\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.258931 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-config-data-custom\") pod \"bdff2588-dcf5-43de-9d14-44da1a137a87\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.258981 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-combined-ca-bundle\") pod \"bdff2588-dcf5-43de-9d14-44da1a137a87\" (UID: \"bdff2588-dcf5-43de-9d14-44da1a137a87\") " Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.265529 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-scripts" (OuterVolumeSpecName: "scripts") pod "bdff2588-dcf5-43de-9d14-44da1a137a87" (UID: "bdff2588-dcf5-43de-9d14-44da1a137a87"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.267131 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdff2588-dcf5-43de-9d14-44da1a137a87-kube-api-access-wlq55" (OuterVolumeSpecName: "kube-api-access-wlq55") pod "bdff2588-dcf5-43de-9d14-44da1a137a87" (UID: "bdff2588-dcf5-43de-9d14-44da1a137a87"). InnerVolumeSpecName "kube-api-access-wlq55". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.272303 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdff2588-dcf5-43de-9d14-44da1a137a87-certs" (OuterVolumeSpecName: "certs") pod "bdff2588-dcf5-43de-9d14-44da1a137a87" (UID: "bdff2588-dcf5-43de-9d14-44da1a137a87"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.274061 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bdff2588-dcf5-43de-9d14-44da1a137a87" (UID: "bdff2588-dcf5-43de-9d14-44da1a137a87"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.359397 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-config-data" (OuterVolumeSpecName: "config-data") pod "bdff2588-dcf5-43de-9d14-44da1a137a87" (UID: "bdff2588-dcf5-43de-9d14-44da1a137a87"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.362325 4708 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-config-data-custom\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.362350 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlq55\" (UniqueName: \"kubernetes.io/projected/bdff2588-dcf5-43de-9d14-44da1a137a87-kube-api-access-wlq55\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.362362 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.362370 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.362378 4708 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/bdff2588-dcf5-43de-9d14-44da1a137a87-certs\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.374023 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bdff2588-dcf5-43de-9d14-44da1a137a87" (UID: "bdff2588-dcf5-43de-9d14-44da1a137a87"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.464409 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdff2588-dcf5-43de-9d14-44da1a137a87-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.660875 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-597b655d8b-dmxbr" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.854156 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"bdff2588-dcf5-43de-9d14-44da1a137a87","Type":"ContainerDied","Data":"788e426a18c3f3cdb354259a22ff46a14146e6b437e691bd680554a85895f7ca"} Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.854459 4708 scope.go:117] "RemoveContainer" containerID="3f21dd641d04903eba524a3fb631253d8edb9da59ae5a4ca2504eac9384f1a0f" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.854300 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6d5895b968-p7cts" podUID="5177dfe3-b55f-4a39-9a6b-392796ed3084" containerName="placement-log" containerID="cri-o://34baef7298c066fd3c9b5ab228daf606ec856f25a5887a64e0ba5c2cbf3e7800" gracePeriod=30 Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.854672 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-6d5895b968-p7cts" podUID="5177dfe3-b55f-4a39-9a6b-392796ed3084" containerName="placement-api" containerID="cri-o://a860c53fcddd4b16f7cde366f4a71316d8852ebfff71caf473957f7dfd0291ec" gracePeriod=30 Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.854765 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.908899 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.921689 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.937479 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 27 17:17:38 crc kubenswrapper[4708]: E0227 17:17:38.937936 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdff2588-dcf5-43de-9d14-44da1a137a87" containerName="cloudkitty-proc" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.937949 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdff2588-dcf5-43de-9d14-44da1a137a87" containerName="cloudkitty-proc" Feb 27 17:17:38 crc kubenswrapper[4708]: E0227 17:17:38.937974 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e596e79-d862-49bc-b016-afaaab6828f8" containerName="dnsmasq-dns" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.937981 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e596e79-d862-49bc-b016-afaaab6828f8" containerName="dnsmasq-dns" Feb 27 17:17:38 crc kubenswrapper[4708]: E0227 17:17:38.938002 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerName="barbican-api-log" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.938008 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerName="barbican-api-log" Feb 27 17:17:38 crc kubenswrapper[4708]: E0227 17:17:38.938022 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e596e79-d862-49bc-b016-afaaab6828f8" containerName="init" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.938028 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e596e79-d862-49bc-b016-afaaab6828f8" containerName="init" Feb 27 17:17:38 crc kubenswrapper[4708]: E0227 17:17:38.938047 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerName="barbican-api" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.938054 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerName="barbican-api" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.938220 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerName="barbican-api" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.938232 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdff2588-dcf5-43de-9d14-44da1a137a87" containerName="cloudkitty-proc" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.938240 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e596e79-d862-49bc-b016-afaaab6828f8" containerName="dnsmasq-dns" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.938259 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d8544df-a61e-464b-bc9e-9a68908322c8" containerName="barbican-api-log" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.938978 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.942084 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.955478 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.972717 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-scripts\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.972773 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.972796 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-config-data\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.972881 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/607cf703-5051-4836-92bd-657dbab39bd4-certs\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.972899 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tndks\" (UniqueName: \"kubernetes.io/projected/607cf703-5051-4836-92bd-657dbab39bd4-kube-api-access-tndks\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:38 crc kubenswrapper[4708]: I0227 17:17:38.972920 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.073418 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-scripts\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.073463 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.073489 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-config-data\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.073560 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/607cf703-5051-4836-92bd-657dbab39bd4-certs\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.073577 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tndks\" (UniqueName: \"kubernetes.io/projected/607cf703-5051-4836-92bd-657dbab39bd4-kube-api-access-tndks\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.073600 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.078441 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.079540 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.079823 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/607cf703-5051-4836-92bd-657dbab39bd4-certs\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.091398 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-scripts\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.092278 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-config-data\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.094579 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tndks\" (UniqueName: \"kubernetes.io/projected/607cf703-5051-4836-92bd-657dbab39bd4-kube-api-access-tndks\") pod \"cloudkitty-proc-0\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.314014 4708 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.345306 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.740369 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.864948 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"607cf703-5051-4836-92bd-657dbab39bd4","Type":"ContainerStarted","Data":"a9f71fe981ad286a34f9714080e0779b6e69ec09cd4b41eaef90962c4a9fcef1"} Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.866759 4708 generic.go:334] "Generic (PLEG): container finished" podID="5177dfe3-b55f-4a39-9a6b-392796ed3084" containerID="34baef7298c066fd3c9b5ab228daf606ec856f25a5887a64e0ba5c2cbf3e7800" exitCode=143 Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.866873 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d5895b968-p7cts" event={"ID":"5177dfe3-b55f-4a39-9a6b-392796ed3084","Type":"ContainerDied","Data":"34baef7298c066fd3c9b5ab228daf606ec856f25a5887a64e0ba5c2cbf3e7800"} Feb 27 17:17:39 crc kubenswrapper[4708]: I0227 17:17:39.994762 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Feb 27 17:17:40 crc kubenswrapper[4708]: I0227 17:17:40.249402 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdff2588-dcf5-43de-9d14-44da1a137a87" path="/var/lib/kubelet/pods/bdff2588-dcf5-43de-9d14-44da1a137a87/volumes" Feb 27 17:17:40 crc kubenswrapper[4708]: I0227 17:17:40.879618 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"607cf703-5051-4836-92bd-657dbab39bd4","Type":"ContainerStarted","Data":"0acb70ddb957b8a9e114d0e8c7558e2447b45e540454c2b0933936cd17f84728"} Feb 27 17:17:40 crc kubenswrapper[4708]: I0227 17:17:40.896507 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.896490881 podStartE2EDuration="2.896490881s" podCreationTimestamp="2026-02-27 17:17:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:40.89218199 +0000 UTC m=+1459.407979577" watchObservedRunningTime="2026-02-27 17:17:40.896490881 +0000 UTC m=+1459.412288458" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.664091 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.665658 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.667801 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.668047 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-f729f" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.668271 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.685476 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.838241 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12701673-c2a0-4e8a-b906-b7e61a49c224-openstack-config-secret\") pod \"openstackclient\" (UID: \"12701673-c2a0-4e8a-b906-b7e61a49c224\") " pod="openstack/openstackclient" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.838544 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12701673-c2a0-4e8a-b906-b7e61a49c224-combined-ca-bundle\") pod \"openstackclient\" (UID: \"12701673-c2a0-4e8a-b906-b7e61a49c224\") " pod="openstack/openstackclient" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.838627 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12701673-c2a0-4e8a-b906-b7e61a49c224-openstack-config\") pod \"openstackclient\" (UID: \"12701673-c2a0-4e8a-b906-b7e61a49c224\") " pod="openstack/openstackclient" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.838657 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hd24x\" (UniqueName: \"kubernetes.io/projected/12701673-c2a0-4e8a-b906-b7e61a49c224-kube-api-access-hd24x\") pod \"openstackclient\" (UID: \"12701673-c2a0-4e8a-b906-b7e61a49c224\") " pod="openstack/openstackclient" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.940055 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12701673-c2a0-4e8a-b906-b7e61a49c224-openstack-config\") pod \"openstackclient\" (UID: \"12701673-c2a0-4e8a-b906-b7e61a49c224\") " pod="openstack/openstackclient" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.940120 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hd24x\" (UniqueName: \"kubernetes.io/projected/12701673-c2a0-4e8a-b906-b7e61a49c224-kube-api-access-hd24x\") pod \"openstackclient\" (UID: \"12701673-c2a0-4e8a-b906-b7e61a49c224\") " pod="openstack/openstackclient" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.940201 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12701673-c2a0-4e8a-b906-b7e61a49c224-openstack-config-secret\") pod \"openstackclient\" (UID: \"12701673-c2a0-4e8a-b906-b7e61a49c224\") " pod="openstack/openstackclient" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.940238 4708 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12701673-c2a0-4e8a-b906-b7e61a49c224-combined-ca-bundle\") pod \"openstackclient\" (UID: \"12701673-c2a0-4e8a-b906-b7e61a49c224\") " pod="openstack/openstackclient" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.941277 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/12701673-c2a0-4e8a-b906-b7e61a49c224-openstack-config\") pod \"openstackclient\" (UID: \"12701673-c2a0-4e8a-b906-b7e61a49c224\") " pod="openstack/openstackclient" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.947068 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/12701673-c2a0-4e8a-b906-b7e61a49c224-combined-ca-bundle\") pod \"openstackclient\" (UID: \"12701673-c2a0-4e8a-b906-b7e61a49c224\") " pod="openstack/openstackclient" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.962813 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/12701673-c2a0-4e8a-b906-b7e61a49c224-openstack-config-secret\") pod \"openstackclient\" (UID: \"12701673-c2a0-4e8a-b906-b7e61a49c224\") " pod="openstack/openstackclient" Feb 27 17:17:41 crc kubenswrapper[4708]: I0227 17:17:41.966359 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hd24x\" (UniqueName: \"kubernetes.io/projected/12701673-c2a0-4e8a-b906-b7e61a49c224-kube-api-access-hd24x\") pod \"openstackclient\" (UID: \"12701673-c2a0-4e8a-b906-b7e61a49c224\") " pod="openstack/openstackclient" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.055288 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.646637 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.659925 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-scripts\") pod \"5177dfe3-b55f-4a39-9a6b-392796ed3084\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.660012 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-internal-tls-certs\") pod \"5177dfe3-b55f-4a39-9a6b-392796ed3084\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.660062 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-combined-ca-bundle\") pod \"5177dfe3-b55f-4a39-9a6b-392796ed3084\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.660189 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8ntt\" (UniqueName: \"kubernetes.io/projected/5177dfe3-b55f-4a39-9a6b-392796ed3084-kube-api-access-r8ntt\") pod \"5177dfe3-b55f-4a39-9a6b-392796ed3084\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.660223 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-config-data\") pod \"5177dfe3-b55f-4a39-9a6b-392796ed3084\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.660271 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5177dfe3-b55f-4a39-9a6b-392796ed3084-logs\") pod \"5177dfe3-b55f-4a39-9a6b-392796ed3084\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.660300 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-public-tls-certs\") pod \"5177dfe3-b55f-4a39-9a6b-392796ed3084\" (UID: \"5177dfe3-b55f-4a39-9a6b-392796ed3084\") " Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.665729 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5177dfe3-b55f-4a39-9a6b-392796ed3084-logs" (OuterVolumeSpecName: "logs") pod "5177dfe3-b55f-4a39-9a6b-392796ed3084" (UID: "5177dfe3-b55f-4a39-9a6b-392796ed3084"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.677350 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5177dfe3-b55f-4a39-9a6b-392796ed3084-kube-api-access-r8ntt" (OuterVolumeSpecName: "kube-api-access-r8ntt") pod "5177dfe3-b55f-4a39-9a6b-392796ed3084" (UID: "5177dfe3-b55f-4a39-9a6b-392796ed3084"). InnerVolumeSpecName "kube-api-access-r8ntt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.716010 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-scripts" (OuterVolumeSpecName: "scripts") pod "5177dfe3-b55f-4a39-9a6b-392796ed3084" (UID: "5177dfe3-b55f-4a39-9a6b-392796ed3084"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.762930 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8ntt\" (UniqueName: \"kubernetes.io/projected/5177dfe3-b55f-4a39-9a6b-392796ed3084-kube-api-access-r8ntt\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.762963 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5177dfe3-b55f-4a39-9a6b-392796ed3084-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.762975 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.807260 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5177dfe3-b55f-4a39-9a6b-392796ed3084" (UID: "5177dfe3-b55f-4a39-9a6b-392796ed3084"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.812076 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5177dfe3-b55f-4a39-9a6b-392796ed3084" (UID: "5177dfe3-b55f-4a39-9a6b-392796ed3084"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.816973 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-config-data" (OuterVolumeSpecName: "config-data") pod "5177dfe3-b55f-4a39-9a6b-392796ed3084" (UID: "5177dfe3-b55f-4a39-9a6b-392796ed3084"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.830832 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.851101 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "5177dfe3-b55f-4a39-9a6b-392796ed3084" (UID: "5177dfe3-b55f-4a39-9a6b-392796ed3084"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.864756 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.864783 4708 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.864792 4708 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.864803 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5177dfe3-b55f-4a39-9a6b-392796ed3084-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.900296 4708 generic.go:334] "Generic (PLEG): container finished" podID="5177dfe3-b55f-4a39-9a6b-392796ed3084" containerID="a860c53fcddd4b16f7cde366f4a71316d8852ebfff71caf473957f7dfd0291ec" exitCode=0 Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.900361 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-6d5895b968-p7cts" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.900369 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d5895b968-p7cts" event={"ID":"5177dfe3-b55f-4a39-9a6b-392796ed3084","Type":"ContainerDied","Data":"a860c53fcddd4b16f7cde366f4a71316d8852ebfff71caf473957f7dfd0291ec"} Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.900497 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-6d5895b968-p7cts" event={"ID":"5177dfe3-b55f-4a39-9a6b-392796ed3084","Type":"ContainerDied","Data":"45a28912731fa3c96e68cdafdb3d30761fb3853615a89e95830969e5cafc7415"} Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.900534 4708 scope.go:117] "RemoveContainer" containerID="a860c53fcddd4b16f7cde366f4a71316d8852ebfff71caf473957f7dfd0291ec" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.902027 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"12701673-c2a0-4e8a-b906-b7e61a49c224","Type":"ContainerStarted","Data":"1e38b6aead138c934fa2d14b9fabf99784f84d1320c6df7aaf284cf327d41646"} Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.931021 4708 scope.go:117] "RemoveContainer" containerID="34baef7298c066fd3c9b5ab228daf606ec856f25a5887a64e0ba5c2cbf3e7800" Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.972720 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-6d5895b968-p7cts"] Feb 27 17:17:42 crc kubenswrapper[4708]: I0227 17:17:42.997675 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-6d5895b968-p7cts"] Feb 27 17:17:43 crc kubenswrapper[4708]: I0227 17:17:43.036039 4708 scope.go:117] "RemoveContainer" containerID="a860c53fcddd4b16f7cde366f4a71316d8852ebfff71caf473957f7dfd0291ec" Feb 27 17:17:43 crc kubenswrapper[4708]: E0227 17:17:43.038465 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a860c53fcddd4b16f7cde366f4a71316d8852ebfff71caf473957f7dfd0291ec\": container with ID starting with a860c53fcddd4b16f7cde366f4a71316d8852ebfff71caf473957f7dfd0291ec not found: ID does not exist" containerID="a860c53fcddd4b16f7cde366f4a71316d8852ebfff71caf473957f7dfd0291ec" Feb 27 17:17:43 crc kubenswrapper[4708]: I0227 17:17:43.038504 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a860c53fcddd4b16f7cde366f4a71316d8852ebfff71caf473957f7dfd0291ec"} err="failed to get container status \"a860c53fcddd4b16f7cde366f4a71316d8852ebfff71caf473957f7dfd0291ec\": rpc error: code = NotFound desc = could not find container \"a860c53fcddd4b16f7cde366f4a71316d8852ebfff71caf473957f7dfd0291ec\": container with ID starting with a860c53fcddd4b16f7cde366f4a71316d8852ebfff71caf473957f7dfd0291ec not found: ID does not exist" Feb 27 17:17:43 crc kubenswrapper[4708]: I0227 17:17:43.038529 4708 scope.go:117] "RemoveContainer" containerID="34baef7298c066fd3c9b5ab228daf606ec856f25a5887a64e0ba5c2cbf3e7800" Feb 27 17:17:43 crc kubenswrapper[4708]: E0227 17:17:43.041235 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34baef7298c066fd3c9b5ab228daf606ec856f25a5887a64e0ba5c2cbf3e7800\": container with ID starting with 34baef7298c066fd3c9b5ab228daf606ec856f25a5887a64e0ba5c2cbf3e7800 not found: ID does not exist" containerID="34baef7298c066fd3c9b5ab228daf606ec856f25a5887a64e0ba5c2cbf3e7800" Feb 27 17:17:43 crc kubenswrapper[4708]: I0227 17:17:43.041265 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34baef7298c066fd3c9b5ab228daf606ec856f25a5887a64e0ba5c2cbf3e7800"} err="failed to get container status \"34baef7298c066fd3c9b5ab228daf606ec856f25a5887a64e0ba5c2cbf3e7800\": rpc error: code = NotFound desc = could not find container \"34baef7298c066fd3c9b5ab228daf606ec856f25a5887a64e0ba5c2cbf3e7800\": container with ID starting with 34baef7298c066fd3c9b5ab228daf606ec856f25a5887a64e0ba5c2cbf3e7800 not found: ID does not exist" Feb 27 17:17:44 crc kubenswrapper[4708]: I0227 17:17:44.238482 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5177dfe3-b55f-4a39-9a6b-392796ed3084" path="/var/lib/kubelet/pods/5177dfe3-b55f-4a39-9a6b-392796ed3084/volumes" Feb 27 17:17:44 crc kubenswrapper[4708]: I0227 17:17:44.786707 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-547f9bd6cc-98rqm" Feb 27 17:17:44 crc kubenswrapper[4708]: I0227 17:17:44.897322 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d59d57f6-95wt9"] Feb 27 17:17:44 crc kubenswrapper[4708]: I0227 17:17:44.897545 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d59d57f6-95wt9" podUID="0b006312-c735-4397-96d7-0f742b67af82" containerName="neutron-api" containerID="cri-o://82b628f51d5c712d7c99021fcb12ac29169fe79378d581b1d4fc244839d3b797" gracePeriod=30 Feb 27 17:17:44 crc kubenswrapper[4708]: I0227 17:17:44.897926 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-d59d57f6-95wt9" podUID="0b006312-c735-4397-96d7-0f742b67af82" containerName="neutron-httpd" containerID="cri-o://a2c31d3d0e0748b42c1e554b43420c60869f6ee7afbf8eff1040d8d11eaf06ac" gracePeriod=30 Feb 27 17:17:45 crc kubenswrapper[4708]: I0227 17:17:45.279498 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/cinder-scheduler-0" Feb 27 17:17:45 crc kubenswrapper[4708]: I0227 17:17:45.938625 4708 generic.go:334] "Generic (PLEG): container finished" podID="0b006312-c735-4397-96d7-0f742b67af82" containerID="a2c31d3d0e0748b42c1e554b43420c60869f6ee7afbf8eff1040d8d11eaf06ac" exitCode=0 Feb 27 17:17:45 crc kubenswrapper[4708]: I0227 17:17:45.938898 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d59d57f6-95wt9" event={"ID":"0b006312-c735-4397-96d7-0f742b67af82","Type":"ContainerDied","Data":"a2c31d3d0e0748b42c1e554b43420c60869f6ee7afbf8eff1040d8d11eaf06ac"} Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.776141 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-6cffdcc987-z48fb"] Feb 27 17:17:46 crc kubenswrapper[4708]: E0227 17:17:46.776520 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5177dfe3-b55f-4a39-9a6b-392796ed3084" containerName="placement-log" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.776533 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5177dfe3-b55f-4a39-9a6b-392796ed3084" containerName="placement-log" Feb 27 17:17:46 crc kubenswrapper[4708]: E0227 17:17:46.776572 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5177dfe3-b55f-4a39-9a6b-392796ed3084" containerName="placement-api" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.776578 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5177dfe3-b55f-4a39-9a6b-392796ed3084" containerName="placement-api" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.776749 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="5177dfe3-b55f-4a39-9a6b-392796ed3084" containerName="placement-log" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.776774 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="5177dfe3-b55f-4a39-9a6b-392796ed3084" containerName="placement-api" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.777738 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.784207 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.784444 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.790902 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.794898 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6cffdcc987-z48fb"] Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.841114 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-log-httpd\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.841158 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-etc-swift\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.841247 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn298\" (UniqueName: \"kubernetes.io/projected/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-kube-api-access-bn298\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.841281 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-config-data\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.841376 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-internal-tls-certs\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.841471 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-run-httpd\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.841604 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-public-tls-certs\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " 
pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.841808 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-combined-ca-bundle\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.943917 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn298\" (UniqueName: \"kubernetes.io/projected/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-kube-api-access-bn298\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.943967 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-config-data\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.943989 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-internal-tls-certs\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.944017 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-run-httpd\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.944046 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-public-tls-certs\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.944096 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-combined-ca-bundle\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.944157 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-log-httpd\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.944172 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-etc-swift\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " 
pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.945093 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-run-httpd\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.945301 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-log-httpd\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.951832 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-etc-swift\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.952324 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-public-tls-certs\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.953468 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-internal-tls-certs\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.953512 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-config-data\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.953580 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-combined-ca-bundle\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:46 crc kubenswrapper[4708]: I0227 17:17:46.961080 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn298\" (UniqueName: \"kubernetes.io/projected/6e9387a8-c996-4095-8d52-d73b5d6d1d7e-kube-api-access-bn298\") pod \"swift-proxy-6cffdcc987-z48fb\" (UID: \"6e9387a8-c996-4095-8d52-d73b5d6d1d7e\") " pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.127776 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.291904 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.292544 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="ceilometer-central-agent" containerID="cri-o://1c5ea90bcd9e0faced67c0695e67549dac22e2c2a5f3e6945d8727033e1384a8" gracePeriod=30 Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.292715 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="proxy-httpd" containerID="cri-o://f249410a76c58a4b9bf9f3bfb31ff2585e60570ef35facfb9850e03410e4f7c9" gracePeriod=30 Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.292774 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="sg-core" containerID="cri-o://da0cb185dbec473d3c1dbbd8ed39d08660d3b4e96f193b817bbfd350a49572a9" gracePeriod=30 Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.292808 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="ceilometer-notification-agent" containerID="cri-o://6cb680491f190ad1dacd755ef7d397278a9621f1430b9537937fe38ad63885d7" gracePeriod=30 Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.305631 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 27 17:17:47 crc kubenswrapper[4708]: W0227 17:17:47.736509 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6e9387a8_c996_4095_8d52_d73b5d6d1d7e.slice/crio-b672ca0075675a59a087e80005e9134ed6eedfd4e5284ee1a9318223666b03a0 WatchSource:0}: Error finding container b672ca0075675a59a087e80005e9134ed6eedfd4e5284ee1a9318223666b03a0: Status 404 returned error can't find the container with id b672ca0075675a59a087e80005e9134ed6eedfd4e5284ee1a9318223666b03a0 Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.736777 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-6cffdcc987-z48fb"] Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.982138 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6cffdcc987-z48fb" event={"ID":"6e9387a8-c996-4095-8d52-d73b5d6d1d7e","Type":"ContainerStarted","Data":"51538228c791369046e6172029769d5502cb7da3686a18ddc03b33c429ae724c"} Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.982183 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6cffdcc987-z48fb" event={"ID":"6e9387a8-c996-4095-8d52-d73b5d6d1d7e","Type":"ContainerStarted","Data":"b672ca0075675a59a087e80005e9134ed6eedfd4e5284ee1a9318223666b03a0"} Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.984775 4708 generic.go:334] "Generic (PLEG): container finished" podID="cc027c08-56ee-4816-b983-daa9250ba660" containerID="f249410a76c58a4b9bf9f3bfb31ff2585e60570ef35facfb9850e03410e4f7c9" exitCode=0 Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.984806 4708 generic.go:334] "Generic (PLEG): container finished" podID="cc027c08-56ee-4816-b983-daa9250ba660" 
containerID="da0cb185dbec473d3c1dbbd8ed39d08660d3b4e96f193b817bbfd350a49572a9" exitCode=2 Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.984815 4708 generic.go:334] "Generic (PLEG): container finished" podID="cc027c08-56ee-4816-b983-daa9250ba660" containerID="1c5ea90bcd9e0faced67c0695e67549dac22e2c2a5f3e6945d8727033e1384a8" exitCode=0 Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.984836 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc027c08-56ee-4816-b983-daa9250ba660","Type":"ContainerDied","Data":"f249410a76c58a4b9bf9f3bfb31ff2585e60570ef35facfb9850e03410e4f7c9"} Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.984875 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc027c08-56ee-4816-b983-daa9250ba660","Type":"ContainerDied","Data":"da0cb185dbec473d3c1dbbd8ed39d08660d3b4e96f193b817bbfd350a49572a9"} Feb 27 17:17:47 crc kubenswrapper[4708]: I0227 17:17:47.984885 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc027c08-56ee-4816-b983-daa9250ba660","Type":"ContainerDied","Data":"1c5ea90bcd9e0faced67c0695e67549dac22e2c2a5f3e6945d8727033e1384a8"} Feb 27 17:17:49 crc kubenswrapper[4708]: I0227 17:17:49.017102 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-6cffdcc987-z48fb" event={"ID":"6e9387a8-c996-4095-8d52-d73b5d6d1d7e","Type":"ContainerStarted","Data":"ab33213760db3b6722bf050c8487f09ef78e816f63a70b87b77c20844c65600d"} Feb 27 17:17:49 crc kubenswrapper[4708]: I0227 17:17:49.017425 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:49 crc kubenswrapper[4708]: I0227 17:17:49.042258 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-6cffdcc987-z48fb" podStartSLOduration=3.042241613 podStartE2EDuration="3.042241613s" podCreationTimestamp="2026-02-27 17:17:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:49.040869625 +0000 UTC m=+1467.556667212" watchObservedRunningTime="2026-02-27 17:17:49.042241613 +0000 UTC m=+1467.558039200" Feb 27 17:17:50 crc kubenswrapper[4708]: I0227 17:17:50.027435 4708 generic.go:334] "Generic (PLEG): container finished" podID="cc027c08-56ee-4816-b983-daa9250ba660" containerID="6cb680491f190ad1dacd755ef7d397278a9621f1430b9537937fe38ad63885d7" exitCode=0 Feb 27 17:17:50 crc kubenswrapper[4708]: I0227 17:17:50.027514 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc027c08-56ee-4816-b983-daa9250ba660","Type":"ContainerDied","Data":"6cb680491f190ad1dacd755ef7d397278a9621f1430b9537937fe38ad63885d7"} Feb 27 17:17:50 crc kubenswrapper[4708]: I0227 17:17:50.028656 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:51 crc kubenswrapper[4708]: I0227 17:17:51.959161 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-lnkws"] Feb 27 17:17:51 crc kubenswrapper[4708]: I0227 17:17:51.960638 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-lnkws" Feb 27 17:17:51 crc kubenswrapper[4708]: I0227 17:17:51.971109 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-d201-account-create-update-gjhmj"] Feb 27 17:17:51 crc kubenswrapper[4708]: I0227 17:17:51.972530 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d201-account-create-update-gjhmj" Feb 27 17:17:51 crc kubenswrapper[4708]: I0227 17:17:51.975877 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Feb 27 17:17:51 crc kubenswrapper[4708]: I0227 17:17:51.990967 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lnkws"] Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.037955 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d201-account-create-update-gjhmj"] Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.055998 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrfsl\" (UniqueName: \"kubernetes.io/projected/6461de7d-1631-4115-becf-c90470540a61-kube-api-access-qrfsl\") pod \"nova-api-db-create-lnkws\" (UID: \"6461de7d-1631-4115-becf-c90470540a61\") " pod="openstack/nova-api-db-create-lnkws" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.056039 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6461de7d-1631-4115-becf-c90470540a61-operator-scripts\") pod \"nova-api-db-create-lnkws\" (UID: \"6461de7d-1631-4115-becf-c90470540a61\") " pod="openstack/nova-api-db-create-lnkws" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.056111 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5lnk\" (UniqueName: \"kubernetes.io/projected/babefe61-6400-45bd-9c1a-2a20c9e0745b-kube-api-access-q5lnk\") pod \"nova-api-d201-account-create-update-gjhmj\" (UID: \"babefe61-6400-45bd-9c1a-2a20c9e0745b\") " pod="openstack/nova-api-d201-account-create-update-gjhmj" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.056171 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babefe61-6400-45bd-9c1a-2a20c9e0745b-operator-scripts\") pod \"nova-api-d201-account-create-update-gjhmj\" (UID: \"babefe61-6400-45bd-9c1a-2a20c9e0745b\") " pod="openstack/nova-api-d201-account-create-update-gjhmj" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.158839 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5lnk\" (UniqueName: \"kubernetes.io/projected/babefe61-6400-45bd-9c1a-2a20c9e0745b-kube-api-access-q5lnk\") pod \"nova-api-d201-account-create-update-gjhmj\" (UID: \"babefe61-6400-45bd-9c1a-2a20c9e0745b\") " pod="openstack/nova-api-d201-account-create-update-gjhmj" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.159202 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babefe61-6400-45bd-9c1a-2a20c9e0745b-operator-scripts\") pod \"nova-api-d201-account-create-update-gjhmj\" (UID: \"babefe61-6400-45bd-9c1a-2a20c9e0745b\") " pod="openstack/nova-api-d201-account-create-update-gjhmj" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.159454 
4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrfsl\" (UniqueName: \"kubernetes.io/projected/6461de7d-1631-4115-becf-c90470540a61-kube-api-access-qrfsl\") pod \"nova-api-db-create-lnkws\" (UID: \"6461de7d-1631-4115-becf-c90470540a61\") " pod="openstack/nova-api-db-create-lnkws" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.159548 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6461de7d-1631-4115-becf-c90470540a61-operator-scripts\") pod \"nova-api-db-create-lnkws\" (UID: \"6461de7d-1631-4115-becf-c90470540a61\") " pod="openstack/nova-api-db-create-lnkws" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.160877 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6461de7d-1631-4115-becf-c90470540a61-operator-scripts\") pod \"nova-api-db-create-lnkws\" (UID: \"6461de7d-1631-4115-becf-c90470540a61\") " pod="openstack/nova-api-db-create-lnkws" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.161481 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babefe61-6400-45bd-9c1a-2a20c9e0745b-operator-scripts\") pod \"nova-api-d201-account-create-update-gjhmj\" (UID: \"babefe61-6400-45bd-9c1a-2a20c9e0745b\") " pod="openstack/nova-api-d201-account-create-update-gjhmj" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.162256 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-xrqrb"] Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.173664 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xrqrb" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.175452 4708 generic.go:334] "Generic (PLEG): container finished" podID="0b006312-c735-4397-96d7-0f742b67af82" containerID="82b628f51d5c712d7c99021fcb12ac29169fe79378d581b1d4fc244839d3b797" exitCode=0 Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.175491 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d59d57f6-95wt9" event={"ID":"0b006312-c735-4397-96d7-0f742b67af82","Type":"ContainerDied","Data":"82b628f51d5c712d7c99021fcb12ac29169fe79378d581b1d4fc244839d3b797"} Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.196582 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrfsl\" (UniqueName: \"kubernetes.io/projected/6461de7d-1631-4115-becf-c90470540a61-kube-api-access-qrfsl\") pod \"nova-api-db-create-lnkws\" (UID: \"6461de7d-1631-4115-becf-c90470540a61\") " pod="openstack/nova-api-db-create-lnkws" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.206443 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5lnk\" (UniqueName: \"kubernetes.io/projected/babefe61-6400-45bd-9c1a-2a20c9e0745b-kube-api-access-q5lnk\") pod \"nova-api-d201-account-create-update-gjhmj\" (UID: \"babefe61-6400-45bd-9c1a-2a20c9e0745b\") " pod="openstack/nova-api-d201-account-create-update-gjhmj" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.212914 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xrqrb"] Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.280395 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-lnkws" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.292745 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d201-account-create-update-gjhmj" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.334886 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8912-account-create-update-8crv5"] Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.336157 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8912-account-create-update-8crv5"] Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.336182 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-ds9xz"] Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.337036 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ds9xz" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.337747 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8912-account-create-update-8crv5" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.340767 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.346920 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ds9xz"] Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.364006 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/018ebe44-d885-4630-be79-a1dd5dbc46ae-operator-scripts\") pod \"nova-cell0-db-create-xrqrb\" (UID: \"018ebe44-d885-4630-be79-a1dd5dbc46ae\") " pod="openstack/nova-cell0-db-create-xrqrb" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.371143 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtxz5\" (UniqueName: \"kubernetes.io/projected/018ebe44-d885-4630-be79-a1dd5dbc46ae-kube-api-access-vtxz5\") pod \"nova-cell0-db-create-xrqrb\" (UID: \"018ebe44-d885-4630-be79-a1dd5dbc46ae\") " pod="openstack/nova-cell0-db-create-xrqrb" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.412490 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6cffdcc987-z48fb" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.475026 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdhdt\" (UniqueName: \"kubernetes.io/projected/eb8e6804-81dd-4862-af76-3015e030b84d-kube-api-access-gdhdt\") pod \"nova-cell0-8912-account-create-update-8crv5\" (UID: \"eb8e6804-81dd-4862-af76-3015e030b84d\") " pod="openstack/nova-cell0-8912-account-create-update-8crv5" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.475122 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb8e6804-81dd-4862-af76-3015e030b84d-operator-scripts\") pod \"nova-cell0-8912-account-create-update-8crv5\" (UID: \"eb8e6804-81dd-4862-af76-3015e030b84d\") " pod="openstack/nova-cell0-8912-account-create-update-8crv5" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.475203 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/018ebe44-d885-4630-be79-a1dd5dbc46ae-operator-scripts\") pod \"nova-cell0-db-create-xrqrb\" (UID: \"018ebe44-d885-4630-be79-a1dd5dbc46ae\") " pod="openstack/nova-cell0-db-create-xrqrb" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.475234 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9f7x4\" (UniqueName: \"kubernetes.io/projected/7b144171-78f3-46fd-ad40-aafb289868d5-kube-api-access-9f7x4\") pod \"nova-cell1-db-create-ds9xz\" (UID: \"7b144171-78f3-46fd-ad40-aafb289868d5\") " pod="openstack/nova-cell1-db-create-ds9xz" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.475274 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b144171-78f3-46fd-ad40-aafb289868d5-operator-scripts\") pod \"nova-cell1-db-create-ds9xz\" (UID: \"7b144171-78f3-46fd-ad40-aafb289868d5\") " pod="openstack/nova-cell1-db-create-ds9xz" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.475310 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtxz5\" (UniqueName: \"kubernetes.io/projected/018ebe44-d885-4630-be79-a1dd5dbc46ae-kube-api-access-vtxz5\") pod \"nova-cell0-db-create-xrqrb\" (UID: \"018ebe44-d885-4630-be79-a1dd5dbc46ae\") " pod="openstack/nova-cell0-db-create-xrqrb" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.476218 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/018ebe44-d885-4630-be79-a1dd5dbc46ae-operator-scripts\") pod \"nova-cell0-db-create-xrqrb\" (UID: \"018ebe44-d885-4630-be79-a1dd5dbc46ae\") " pod="openstack/nova-cell0-db-create-xrqrb" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.486643 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-43f3-account-create-update-92rv7"] Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.488317 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-43f3-account-create-update-92rv7" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.492344 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.494149 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtxz5\" (UniqueName: \"kubernetes.io/projected/018ebe44-d885-4630-be79-a1dd5dbc46ae-kube-api-access-vtxz5\") pod \"nova-cell0-db-create-xrqrb\" (UID: \"018ebe44-d885-4630-be79-a1dd5dbc46ae\") " pod="openstack/nova-cell0-db-create-xrqrb" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.500336 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-43f3-account-create-update-92rv7"] Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.578660 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb8e6804-81dd-4862-af76-3015e030b84d-operator-scripts\") pod \"nova-cell0-8912-account-create-update-8crv5\" (UID: \"eb8e6804-81dd-4862-af76-3015e030b84d\") " pod="openstack/nova-cell0-8912-account-create-update-8crv5" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.578830 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9f7x4\" (UniqueName: \"kubernetes.io/projected/7b144171-78f3-46fd-ad40-aafb289868d5-kube-api-access-9f7x4\") pod \"nova-cell1-db-create-ds9xz\" (UID: \"7b144171-78f3-46fd-ad40-aafb289868d5\") " pod="openstack/nova-cell1-db-create-ds9xz" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.578988 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b144171-78f3-46fd-ad40-aafb289868d5-operator-scripts\") pod \"nova-cell1-db-create-ds9xz\" (UID: \"7b144171-78f3-46fd-ad40-aafb289868d5\") " pod="openstack/nova-cell1-db-create-ds9xz" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.579163 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdhdt\" (UniqueName: \"kubernetes.io/projected/eb8e6804-81dd-4862-af76-3015e030b84d-kube-api-access-gdhdt\") pod \"nova-cell0-8912-account-create-update-8crv5\" (UID: \"eb8e6804-81dd-4862-af76-3015e030b84d\") " pod="openstack/nova-cell0-8912-account-create-update-8crv5" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.579456 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb8e6804-81dd-4862-af76-3015e030b84d-operator-scripts\") pod \"nova-cell0-8912-account-create-update-8crv5\" (UID: \"eb8e6804-81dd-4862-af76-3015e030b84d\") " pod="openstack/nova-cell0-8912-account-create-update-8crv5" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.580652 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b144171-78f3-46fd-ad40-aafb289868d5-operator-scripts\") pod \"nova-cell1-db-create-ds9xz\" (UID: \"7b144171-78f3-46fd-ad40-aafb289868d5\") " pod="openstack/nova-cell1-db-create-ds9xz" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.599713 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdhdt\" (UniqueName: \"kubernetes.io/projected/eb8e6804-81dd-4862-af76-3015e030b84d-kube-api-access-gdhdt\") pod 
\"nova-cell0-8912-account-create-update-8crv5\" (UID: \"eb8e6804-81dd-4862-af76-3015e030b84d\") " pod="openstack/nova-cell0-8912-account-create-update-8crv5" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.601923 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9f7x4\" (UniqueName: \"kubernetes.io/projected/7b144171-78f3-46fd-ad40-aafb289868d5-kube-api-access-9f7x4\") pod \"nova-cell1-db-create-ds9xz\" (UID: \"7b144171-78f3-46fd-ad40-aafb289868d5\") " pod="openstack/nova-cell1-db-create-ds9xz" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.634892 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xrqrb" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.682021 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5nh4\" (UniqueName: \"kubernetes.io/projected/4d9181e3-1fa3-4039-ba55-0462c9243351-kube-api-access-z5nh4\") pod \"nova-cell1-43f3-account-create-update-92rv7\" (UID: \"4d9181e3-1fa3-4039-ba55-0462c9243351\") " pod="openstack/nova-cell1-43f3-account-create-update-92rv7" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.682084 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d9181e3-1fa3-4039-ba55-0462c9243351-operator-scripts\") pod \"nova-cell1-43f3-account-create-update-92rv7\" (UID: \"4d9181e3-1fa3-4039-ba55-0462c9243351\") " pod="openstack/nova-cell1-43f3-account-create-update-92rv7" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.708473 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ds9xz" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.712551 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8912-account-create-update-8crv5" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.783594 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d9181e3-1fa3-4039-ba55-0462c9243351-operator-scripts\") pod \"nova-cell1-43f3-account-create-update-92rv7\" (UID: \"4d9181e3-1fa3-4039-ba55-0462c9243351\") " pod="openstack/nova-cell1-43f3-account-create-update-92rv7" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.783795 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5nh4\" (UniqueName: \"kubernetes.io/projected/4d9181e3-1fa3-4039-ba55-0462c9243351-kube-api-access-z5nh4\") pod \"nova-cell1-43f3-account-create-update-92rv7\" (UID: \"4d9181e3-1fa3-4039-ba55-0462c9243351\") " pod="openstack/nova-cell1-43f3-account-create-update-92rv7" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.784352 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d9181e3-1fa3-4039-ba55-0462c9243351-operator-scripts\") pod \"nova-cell1-43f3-account-create-update-92rv7\" (UID: \"4d9181e3-1fa3-4039-ba55-0462c9243351\") " pod="openstack/nova-cell1-43f3-account-create-update-92rv7" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.805572 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5nh4\" (UniqueName: \"kubernetes.io/projected/4d9181e3-1fa3-4039-ba55-0462c9243351-kube-api-access-z5nh4\") pod \"nova-cell1-43f3-account-create-update-92rv7\" (UID: \"4d9181e3-1fa3-4039-ba55-0462c9243351\") " pod="openstack/nova-cell1-43f3-account-create-update-92rv7" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.862785 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-43f3-account-create-update-92rv7" Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.974693 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.974946 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" containerName="glance-log" containerID="cri-o://fe9feaf3fd1d7772168306173f5ad83b15bce51483d974e725ebf89632e2894c" gracePeriod=30 Feb 27 17:17:52 crc kubenswrapper[4708]: I0227 17:17:52.975025 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" containerName="glance-httpd" containerID="cri-o://fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0" gracePeriod=30 Feb 27 17:17:53 crc kubenswrapper[4708]: I0227 17:17:53.195161 4708 generic.go:334] "Generic (PLEG): container finished" podID="695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" containerID="fe9feaf3fd1d7772168306173f5ad83b15bce51483d974e725ebf89632e2894c" exitCode=143 Feb 27 17:17:53 crc kubenswrapper[4708]: I0227 17:17:53.195201 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134","Type":"ContainerDied","Data":"fe9feaf3fd1d7772168306173f5ad83b15bce51483d974e725ebf89632e2894c"} Feb 27 17:17:53 crc kubenswrapper[4708]: I0227 17:17:53.482805 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.193:3000/\": dial tcp 10.217.0.193:3000: connect: connection refused" Feb 27 17:17:54 crc kubenswrapper[4708]: I0227 17:17:54.065680 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:17:54 crc kubenswrapper[4708]: I0227 17:17:54.065962 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d6d082cd-70c3-4ee1-9675-294347882c7d" containerName="glance-log" containerID="cri-o://c0ed96637e848a67aa41fb01c560f7c5d9659c8953c083017454fd907b1a3a07" gracePeriod=30 Feb 27 17:17:54 crc kubenswrapper[4708]: I0227 17:17:54.066030 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="d6d082cd-70c3-4ee1-9675-294347882c7d" containerName="glance-httpd" containerID="cri-o://24738811b9ec3e9321ef8fc2690e4119c5c5b9e5efa38ce2493c447cbc025390" gracePeriod=30 Feb 27 17:17:54 crc kubenswrapper[4708]: I0227 17:17:54.078547 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/glance-default-internal-api-0" podUID="d6d082cd-70c3-4ee1-9675-294347882c7d" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.178:9292/healthcheck\": EOF" Feb 27 17:17:54 crc kubenswrapper[4708]: I0227 17:17:54.205355 4708 generic.go:334] "Generic (PLEG): container finished" podID="d6d082cd-70c3-4ee1-9675-294347882c7d" containerID="c0ed96637e848a67aa41fb01c560f7c5d9659c8953c083017454fd907b1a3a07" exitCode=143 Feb 27 17:17:54 crc kubenswrapper[4708]: I0227 17:17:54.205397 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"d6d082cd-70c3-4ee1-9675-294347882c7d","Type":"ContainerDied","Data":"c0ed96637e848a67aa41fb01c560f7c5d9659c8953c083017454fd907b1a3a07"} Feb 27 17:17:55 crc kubenswrapper[4708]: I0227 17:17:55.872513 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.063079 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-combined-ca-bundle\") pod \"cc027c08-56ee-4816-b983-daa9250ba660\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.063484 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pngsd\" (UniqueName: \"kubernetes.io/projected/cc027c08-56ee-4816-b983-daa9250ba660-kube-api-access-pngsd\") pod \"cc027c08-56ee-4816-b983-daa9250ba660\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.063630 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-scripts\") pod \"cc027c08-56ee-4816-b983-daa9250ba660\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.063670 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-sg-core-conf-yaml\") pod \"cc027c08-56ee-4816-b983-daa9250ba660\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.063714 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc027c08-56ee-4816-b983-daa9250ba660-log-httpd\") pod \"cc027c08-56ee-4816-b983-daa9250ba660\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.063751 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-config-data\") pod \"cc027c08-56ee-4816-b983-daa9250ba660\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.063811 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc027c08-56ee-4816-b983-daa9250ba660-run-httpd\") pod \"cc027c08-56ee-4816-b983-daa9250ba660\" (UID: \"cc027c08-56ee-4816-b983-daa9250ba660\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.065055 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc027c08-56ee-4816-b983-daa9250ba660-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "cc027c08-56ee-4816-b983-daa9250ba660" (UID: "cc027c08-56ee-4816-b983-daa9250ba660"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.065581 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc027c08-56ee-4816-b983-daa9250ba660-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "cc027c08-56ee-4816-b983-daa9250ba660" (UID: "cc027c08-56ee-4816-b983-daa9250ba660"). 
InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.067973 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc027c08-56ee-4816-b983-daa9250ba660-kube-api-access-pngsd" (OuterVolumeSpecName: "kube-api-access-pngsd") pod "cc027c08-56ee-4816-b983-daa9250ba660" (UID: "cc027c08-56ee-4816-b983-daa9250ba660"). InnerVolumeSpecName "kube-api-access-pngsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.071392 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-scripts" (OuterVolumeSpecName: "scripts") pod "cc027c08-56ee-4816-b983-daa9250ba660" (UID: "cc027c08-56ee-4816-b983-daa9250ba660"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.102810 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "cc027c08-56ee-4816-b983-daa9250ba660" (UID: "cc027c08-56ee-4816-b983-daa9250ba660"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.156907 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cc027c08-56ee-4816-b983-daa9250ba660" (UID: "cc027c08-56ee-4816-b983-daa9250ba660"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.165630 4708 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc027c08-56ee-4816-b983-daa9250ba660-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.166738 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.166765 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pngsd\" (UniqueName: \"kubernetes.io/projected/cc027c08-56ee-4816-b983-daa9250ba660-kube-api-access-pngsd\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.166774 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.166783 4708 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.166792 4708 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/cc027c08-56ee-4816-b983-daa9250ba660-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.208822 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-config-data" (OuterVolumeSpecName: "config-data") pod "cc027c08-56ee-4816-b983-daa9250ba660" (UID: "cc027c08-56ee-4816-b983-daa9250ba660"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.248337 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.253922 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"cc027c08-56ee-4816-b983-daa9250ba660","Type":"ContainerDied","Data":"7c6478e9982d18b252bb982a0023fe74fd9b3e644b48f396ad52ca3ccc2a7153"} Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.253985 4708 scope.go:117] "RemoveContainer" containerID="f249410a76c58a4b9bf9f3bfb31ff2585e60570ef35facfb9850e03410e4f7c9" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.266270 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"12701673-c2a0-4e8a-b906-b7e61a49c224","Type":"ContainerStarted","Data":"6daa1e0ce477fca53fd1b20fad224366875bbdd56f300de1de74b9e3607ac28e"} Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.269128 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cc027c08-56ee-4816-b983-daa9250ba660-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.278797 4708 scope.go:117] "RemoveContainer" containerID="da0cb185dbec473d3c1dbbd8ed39d08660d3b4e96f193b817bbfd350a49572a9" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.280260 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-d59d57f6-95wt9" event={"ID":"0b006312-c735-4397-96d7-0f742b67af82","Type":"ContainerDied","Data":"5df52fac3c06a8ed094eb5a143e6d967e7fbf60188c8d864cd540cf067521d85"} Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.280302 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5df52fac3c06a8ed094eb5a143e6d967e7fbf60188c8d864cd540cf067521d85" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.300974 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-d59d57f6-95wt9" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.315356 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.5678879759999997 podStartE2EDuration="15.315342018s" podCreationTimestamp="2026-02-27 17:17:41 +0000 UTC" firstStartedPulling="2026-02-27 17:17:42.834054708 +0000 UTC m=+1461.349852295" lastFinishedPulling="2026-02-27 17:17:55.58150875 +0000 UTC m=+1474.097306337" observedRunningTime="2026-02-27 17:17:56.283012929 +0000 UTC m=+1474.798810516" watchObservedRunningTime="2026-02-27 17:17:56.315342018 +0000 UTC m=+1474.831139605" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.324767 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.334154 4708 scope.go:117] "RemoveContainer" containerID="6cb680491f190ad1dacd755ef7d397278a9621f1430b9537937fe38ad63885d7" Feb 27 17:17:56 crc kubenswrapper[4708]: E0227 17:17:56.363750 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod695dc1d6_e0a6_4f40_b7aa_af1c5f49f134.slice/crio-conmon-fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod695dc1d6_e0a6_4f40_b7aa_af1c5f49f134.slice/crio-fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcc027c08_56ee_4816_b983_daa9250ba660.slice\": RecentStats: unable to find data in memory cache]" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.364448 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.398154 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:17:56 crc kubenswrapper[4708]: E0227 17:17:56.398543 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b006312-c735-4397-96d7-0f742b67af82" containerName="neutron-httpd" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.398560 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b006312-c735-4397-96d7-0f742b67af82" containerName="neutron-httpd" Feb 27 17:17:56 crc kubenswrapper[4708]: E0227 17:17:56.398584 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="sg-core" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.398591 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="sg-core" Feb 27 17:17:56 crc kubenswrapper[4708]: E0227 17:17:56.398601 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="ceilometer-central-agent" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.398625 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="ceilometer-central-agent" Feb 27 17:17:56 crc kubenswrapper[4708]: E0227 17:17:56.398638 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="proxy-httpd" Feb 27 17:17:56 crc 
kubenswrapper[4708]: I0227 17:17:56.398644 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="proxy-httpd" Feb 27 17:17:56 crc kubenswrapper[4708]: E0227 17:17:56.398719 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b006312-c735-4397-96d7-0f742b67af82" containerName="neutron-api" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.398726 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b006312-c735-4397-96d7-0f742b67af82" containerName="neutron-api" Feb 27 17:17:56 crc kubenswrapper[4708]: E0227 17:17:56.398749 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="ceilometer-notification-agent" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.398755 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="ceilometer-notification-agent" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.398956 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="ceilometer-central-agent" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.398969 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="sg-core" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.398976 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b006312-c735-4397-96d7-0f742b67af82" containerName="neutron-httpd" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.398984 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b006312-c735-4397-96d7-0f742b67af82" containerName="neutron-api" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.399000 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="proxy-httpd" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.399016 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc027c08-56ee-4816-b983-daa9250ba660" containerName="ceilometer-notification-agent" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.400798 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.408059 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.408318 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.438342 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.440597 4708 scope.go:117] "RemoveContainer" containerID="1c5ea90bcd9e0faced67c0695e67549dac22e2c2a5f3e6945d8727033e1384a8" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.472348 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-ovndb-tls-certs\") pod \"0b006312-c735-4397-96d7-0f742b67af82\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.472639 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-httpd-config\") pod \"0b006312-c735-4397-96d7-0f742b67af82\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.472709 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-combined-ca-bundle\") pod \"0b006312-c735-4397-96d7-0f742b67af82\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.472740 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvhv5\" (UniqueName: \"kubernetes.io/projected/0b006312-c735-4397-96d7-0f742b67af82-kube-api-access-cvhv5\") pod \"0b006312-c735-4397-96d7-0f742b67af82\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.472784 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-config\") pod \"0b006312-c735-4397-96d7-0f742b67af82\" (UID: \"0b006312-c735-4397-96d7-0f742b67af82\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.485962 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "0b006312-c735-4397-96d7-0f742b67af82" (UID: "0b006312-c735-4397-96d7-0f742b67af82"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.488923 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b006312-c735-4397-96d7-0f742b67af82-kube-api-access-cvhv5" (OuterVolumeSpecName: "kube-api-access-cvhv5") pod "0b006312-c735-4397-96d7-0f742b67af82" (UID: "0b006312-c735-4397-96d7-0f742b67af82"). InnerVolumeSpecName "kube-api-access-cvhv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.565125 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-config" (OuterVolumeSpecName: "config") pod "0b006312-c735-4397-96d7-0f742b67af82" (UID: "0b006312-c735-4397-96d7-0f742b67af82"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.571405 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0b006312-c735-4397-96d7-0f742b67af82" (UID: "0b006312-c735-4397-96d7-0f742b67af82"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.582778 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d158057b-69db-47c4-8361-17ceba3ede55-log-httpd\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.582882 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-scripts\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.582950 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d158057b-69db-47c4-8361-17ceba3ede55-run-httpd\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.582983 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.583071 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.583302 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tflfp\" (UniqueName: \"kubernetes.io/projected/d158057b-69db-47c4-8361-17ceba3ede55-kube-api-access-tflfp\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.583549 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-config-data\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc 
kubenswrapper[4708]: I0227 17:17:56.583667 4708 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-httpd-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.583682 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.583693 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvhv5\" (UniqueName: \"kubernetes.io/projected/0b006312-c735-4397-96d7-0f742b67af82-kube-api-access-cvhv5\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.583703 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.593989 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "0b006312-c735-4397-96d7-0f742b67af82" (UID: "0b006312-c735-4397-96d7-0f742b67af82"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.686236 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d158057b-69db-47c4-8361-17ceba3ede55-log-httpd\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.686996 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d158057b-69db-47c4-8361-17ceba3ede55-log-httpd\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.689630 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-scripts\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.689707 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d158057b-69db-47c4-8361-17ceba3ede55-run-httpd\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.689748 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.689821 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " 
pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.690035 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tflfp\" (UniqueName: \"kubernetes.io/projected/d158057b-69db-47c4-8361-17ceba3ede55-kube-api-access-tflfp\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.690204 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d158057b-69db-47c4-8361-17ceba3ede55-run-httpd\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.690222 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-config-data\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.690421 4708 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/0b006312-c735-4397-96d7-0f742b67af82-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.697054 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-scripts\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.698808 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.700293 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-config-data\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.702361 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.713878 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tflfp\" (UniqueName: \"kubernetes.io/projected/d158057b-69db-47c4-8361-17ceba3ede55-kube-api-access-tflfp\") pod \"ceilometer-0\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.748023 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.790404 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.896164 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-public-tls-certs\") pod \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.896345 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") pod \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.897230 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-logs\") pod \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.897357 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-config-data\") pod \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.897440 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bj79r\" (UniqueName: \"kubernetes.io/projected/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-kube-api-access-bj79r\") pod \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.897605 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-httpd-run\") pod \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.897650 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-combined-ca-bundle\") pod \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.897673 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-scripts\") pod \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\" (UID: \"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134\") " Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.899136 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" (UID: "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.899288 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-logs" (OuterVolumeSpecName: "logs") pod "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" (UID: "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.903789 4708 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.903811 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.923284 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-ds9xz"] Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.923788 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-kube-api-access-bj79r" (OuterVolumeSpecName: "kube-api-access-bj79r") pod "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" (UID: "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134"). InnerVolumeSpecName "kube-api-access-bj79r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.924484 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-scripts" (OuterVolumeSpecName: "scripts") pod "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" (UID: "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.924742 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e" (OuterVolumeSpecName: "glance") pod "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" (UID: "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134"). InnerVolumeSpecName "pvc-391ff05f-bf42-4781-89df-7a3aa774575e". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.942367 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-d201-account-create-update-gjhmj"] Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.967826 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" (UID: "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.974777 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" (UID: "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:56 crc kubenswrapper[4708]: I0227 17:17:56.982642 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8912-account-create-update-8crv5"] Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.014230 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-lnkws"] Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.026081 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-43f3-account-create-update-92rv7"] Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.026223 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.026242 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.026253 4708 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.026288 4708 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") on node \"crc\" " Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.026300 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bj79r\" (UniqueName: \"kubernetes.io/projected/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-kube-api-access-bj79r\") on node \"crc\" DevicePath \"\"" Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.027858 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-xrqrb"] Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.060226 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-config-data" (OuterVolumeSpecName: "config-data") pod "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" (UID: "695dc1d6-e0a6-4f40-b7aa-af1c5f49f134"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.062078 4708 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.062207 4708 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-391ff05f-bf42-4781-89df-7a3aa774575e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e") on node "crc"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.127715 4708 reconciler_common.go:293] "Volume detached for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.127744 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.134896 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-6cffdcc987-z48fb"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.304368 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d201-account-create-update-gjhmj" event={"ID":"babefe61-6400-45bd-9c1a-2a20c9e0745b","Type":"ContainerStarted","Data":"a559c9472453fc7aebf081709d40ae47f7ec3655d6e88c63ecffa1c9ef143cb8"}
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.304435 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d201-account-create-update-gjhmj" event={"ID":"babefe61-6400-45bd-9c1a-2a20c9e0745b","Type":"ContainerStarted","Data":"1965ef6f88aaa460109691d2028f07cba5b1a41235e1457ab9390ee66c5810d1"}
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.312539 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8912-account-create-update-8crv5" event={"ID":"eb8e6804-81dd-4862-af76-3015e030b84d","Type":"ContainerStarted","Data":"d2624b38985ecd72f704359a759929ae707908e5ba4429466cb49b8534939eb8"}
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.315200 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.329530 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xrqrb" event={"ID":"018ebe44-d885-4630-be79-a1dd5dbc46ae","Type":"ContainerStarted","Data":"2dc86500fae1f45e8c998ee84826b65fbd004769d96981bd65d3fa5915a539cf"}
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.339375 4708 generic.go:334] "Generic (PLEG): container finished" podID="695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" containerID="fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0" exitCode=0
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.339597 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.339645 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134","Type":"ContainerDied","Data":"fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0"}
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.339684 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"695dc1d6-e0a6-4f40-b7aa-af1c5f49f134","Type":"ContainerDied","Data":"3a318d4a0fb2276f84e28162129a3d0e7b994590a79831e98b0237edc4b523bc"}
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.339705 4708 scope.go:117] "RemoveContainer" containerID="fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.353512 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ds9xz" event={"ID":"7b144171-78f3-46fd-ad40-aafb289868d5","Type":"ContainerStarted","Data":"f0304dad2ca2f5690acb327ca6939a603ab4c271be52886967cf2eac15f13efe"}
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.366895 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lnkws" event={"ID":"6461de7d-1631-4115-becf-c90470540a61","Type":"ContainerStarted","Data":"d4d8ec512808d89dc8dac8be5ab1b56243c83443a4e2fe7b8dbf13dba6f16a3f"}
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.370112 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-43f3-account-create-update-92rv7" event={"ID":"4d9181e3-1fa3-4039-ba55-0462c9243351","Type":"ContainerStarted","Data":"2711e8ab3c910c41988e953eae6022d337f0d3cc0ed0da280342a8bf55075f65"}
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.377684 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-d201-account-create-update-gjhmj" podStartSLOduration=6.377668664 podStartE2EDuration="6.377668664s" podCreationTimestamp="2026-02-27 17:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:57.330345735 +0000 UTC m=+1475.846143322" watchObservedRunningTime="2026-02-27 17:17:57.377668664 +0000 UTC m=+1475.893466251"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.383064 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-d59d57f6-95wt9"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.525905 4708 scope.go:117] "RemoveContainer" containerID="fe9feaf3fd1d7772168306173f5ad83b15bce51483d974e725ebf89632e2894c"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.527426 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-ds9xz" podStartSLOduration=5.527396871 podStartE2EDuration="5.527396871s" podCreationTimestamp="2026-02-27 17:17:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:17:57.385054152 +0000 UTC m=+1475.900851739" watchObservedRunningTime="2026-02-27 17:17:57.527396871 +0000 UTC m=+1476.043194458"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.531987 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.571225 4708 scope.go:117] "RemoveContainer" containerID="fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0"
Feb 27 17:17:57 crc kubenswrapper[4708]: E0227 17:17:57.587057 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0\": container with ID starting with fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0 not found: ID does not exist" containerID="fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.587098 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0"} err="failed to get container status \"fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0\": rpc error: code = NotFound desc = could not find container \"fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0\": container with ID starting with fb9208558a905d1b5a4d2545dbc2e98422fa559cc60405882f6c94e1d130eda0 not found: ID does not exist"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.587117 4708 scope.go:117] "RemoveContainer" containerID="fe9feaf3fd1d7772168306173f5ad83b15bce51483d974e725ebf89632e2894c"
Feb 27 17:17:57 crc kubenswrapper[4708]: E0227 17:17:57.587758 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe9feaf3fd1d7772168306173f5ad83b15bce51483d974e725ebf89632e2894c\": container with ID starting with fe9feaf3fd1d7772168306173f5ad83b15bce51483d974e725ebf89632e2894c not found: ID does not exist" containerID="fe9feaf3fd1d7772168306173f5ad83b15bce51483d974e725ebf89632e2894c"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.587776 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe9feaf3fd1d7772168306173f5ad83b15bce51483d974e725ebf89632e2894c"} err="failed to get container status \"fe9feaf3fd1d7772168306173f5ad83b15bce51483d974e725ebf89632e2894c\": rpc error: code = NotFound desc = could not find container \"fe9feaf3fd1d7772168306173f5ad83b15bce51483d974e725ebf89632e2894c\": container with ID starting with fe9feaf3fd1d7772168306173f5ad83b15bce51483d974e725ebf89632e2894c not found: ID does not exist"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.590429 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.599907 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-d59d57f6-95wt9"]
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.616015 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-d59d57f6-95wt9"]
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.633362 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 17:17:57 crc kubenswrapper[4708]: E0227 17:17:57.633886 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" containerName="glance-httpd"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.633904 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" containerName="glance-httpd"
Feb 27 17:17:57 crc kubenswrapper[4708]: E0227 17:17:57.633922 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" containerName="glance-log"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.633936 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" containerName="glance-log"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.634123 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" containerName="glance-log"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.634142 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" containerName="glance-httpd"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.635202 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.637537 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.637570 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.653722 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.738484 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-logs\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.738547 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.738724 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-scripts\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.738764 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.738981 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qg6l\" (UniqueName: \"kubernetes.io/projected/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-kube-api-access-2qg6l\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.739051 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.739172 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.739239 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-config-data\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.840501 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qg6l\" (UniqueName: \"kubernetes.io/projected/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-kube-api-access-2qg6l\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.840553 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.840617 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.841157 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-config-data\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.841194 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-logs\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.841199 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.841228 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.841276 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-scripts\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.841293 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.842223 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-logs\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.867573 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.868001 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-scripts\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.872799 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.876001 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-config-data\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:57 crc kubenswrapper[4708]: I0227 17:17:57.892604 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qg6l\" (UniqueName: \"kubernetes.io/projected/a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1-kube-api-access-2qg6l\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.009605 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.009659 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/852cd9e461d89b39e32be31d4cb707ef1d2abb65b96de01c0d2dcb097d159f7c/globalmount\"" pod="openstack/glance-default-external-api-0"
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.100425 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-391ff05f-bf42-4781-89df-7a3aa774575e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-391ff05f-bf42-4781-89df-7a3aa774575e\") pod \"glance-default-external-api-0\" (UID: \"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1\") " pod="openstack/glance-default-external-api-0"
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.237577 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b006312-c735-4397-96d7-0f742b67af82" path="/var/lib/kubelet/pods/0b006312-c735-4397-96d7-0f742b67af82/volumes"
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.238399 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="695dc1d6-e0a6-4f40-b7aa-af1c5f49f134" path="/var/lib/kubelet/pods/695dc1d6-e0a6-4f40-b7aa-af1c5f49f134/volumes"
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.238991 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc027c08-56ee-4816-b983-daa9250ba660" path="/var/lib/kubelet/pods/cc027c08-56ee-4816-b983-daa9250ba660/volumes"
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.262112 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.404948 4708 generic.go:334] "Generic (PLEG): container finished" podID="4d9181e3-1fa3-4039-ba55-0462c9243351" containerID="b5cba25021303dbecb07161c3d8f8ddae573496c8a97fd7d8b635c839a5d6ae8" exitCode=0
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.405212 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-43f3-account-create-update-92rv7" event={"ID":"4d9181e3-1fa3-4039-ba55-0462c9243351","Type":"ContainerDied","Data":"b5cba25021303dbecb07161c3d8f8ddae573496c8a97fd7d8b635c839a5d6ae8"}
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.408789 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d158057b-69db-47c4-8361-17ceba3ede55","Type":"ContainerStarted","Data":"460b651d793f3c91cae760c0edb1b0a5f7a3a7025aa2af33d22c631a4d561d5b"}
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.408810 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d158057b-69db-47c4-8361-17ceba3ede55","Type":"ContainerStarted","Data":"a8b66da4f8aef7ae70f4911d9efe25e9ae5a20fa3babec08f567f91cfa7b54a6"}
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.410575 4708 generic.go:334] "Generic (PLEG): container finished" podID="babefe61-6400-45bd-9c1a-2a20c9e0745b" containerID="a559c9472453fc7aebf081709d40ae47f7ec3655d6e88c63ecffa1c9ef143cb8" exitCode=0
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.410619 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d201-account-create-update-gjhmj" event={"ID":"babefe61-6400-45bd-9c1a-2a20c9e0745b","Type":"ContainerDied","Data":"a559c9472453fc7aebf081709d40ae47f7ec3655d6e88c63ecffa1c9ef143cb8"}
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.439896 4708 generic.go:334] "Generic (PLEG): container finished" podID="eb8e6804-81dd-4862-af76-3015e030b84d" containerID="1bb6404ed1725ca80a52e5e7a01e4d3fc71aacd4aab6792a4bc0dca4ce5bf496" exitCode=0
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.440012 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8912-account-create-update-8crv5" event={"ID":"eb8e6804-81dd-4862-af76-3015e030b84d","Type":"ContainerDied","Data":"1bb6404ed1725ca80a52e5e7a01e4d3fc71aacd4aab6792a4bc0dca4ce5bf496"}
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.443315 4708 generic.go:334] "Generic (PLEG): container finished" podID="018ebe44-d885-4630-be79-a1dd5dbc46ae" containerID="a1154e8a71e56f614329082eb40d25bb529e42fb4f0e005e812c27fb899b4386" exitCode=0
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.443446 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xrqrb" event={"ID":"018ebe44-d885-4630-be79-a1dd5dbc46ae","Type":"ContainerDied","Data":"a1154e8a71e56f614329082eb40d25bb529e42fb4f0e005e812c27fb899b4386"}
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.447729 4708 generic.go:334] "Generic (PLEG): container finished" podID="7b144171-78f3-46fd-ad40-aafb289868d5" containerID="484af3da40001ba3c31e8ab1ac6f6bf369cd6dd878bce80437638baca89aa3ac" exitCode=0
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.447815 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ds9xz" event={"ID":"7b144171-78f3-46fd-ad40-aafb289868d5","Type":"ContainerDied","Data":"484af3da40001ba3c31e8ab1ac6f6bf369cd6dd878bce80437638baca89aa3ac"}
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.450471 4708 generic.go:334] "Generic (PLEG): container finished" podID="6461de7d-1631-4115-becf-c90470540a61" containerID="00d1f54549468c77ba53bb980dc32ce0d08e537e3eee0e33d2d6a60ea8cb3067" exitCode=0
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.450536 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lnkws" event={"ID":"6461de7d-1631-4115-becf-c90470540a61","Type":"ContainerDied","Data":"00d1f54549468c77ba53bb980dc32ce0d08e537e3eee0e33d2d6a60ea8cb3067"}
Feb 27 17:17:58 crc kubenswrapper[4708]: I0227 17:17:58.785438 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Feb 27 17:17:58 crc kubenswrapper[4708]: W0227 17:17:58.800572 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda15bcd78_5c20_4ab0_ab64_e7b7e65cf4d1.slice/crio-f263733a1f0011aa36b606aff69f1fcfd47bac084b84fbfb99316cad9aac0296 WatchSource:0}: Error finding container f263733a1f0011aa36b606aff69f1fcfd47bac084b84fbfb99316cad9aac0296: Status 404 returned error can't find the container with id f263733a1f0011aa36b606aff69f1fcfd47bac084b84fbfb99316cad9aac0296
Feb 27 17:17:59 crc kubenswrapper[4708]: I0227 17:17:59.459816 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1","Type":"ContainerStarted","Data":"f263733a1f0011aa36b606aff69f1fcfd47bac084b84fbfb99316cad9aac0296"}
Feb 27 17:17:59 crc kubenswrapper[4708]: I0227 17:17:59.936890 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-43f3-account-create-update-92rv7"
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.091524 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d9181e3-1fa3-4039-ba55-0462c9243351-operator-scripts\") pod \"4d9181e3-1fa3-4039-ba55-0462c9243351\" (UID: \"4d9181e3-1fa3-4039-ba55-0462c9243351\") "
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.091677 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5nh4\" (UniqueName: \"kubernetes.io/projected/4d9181e3-1fa3-4039-ba55-0462c9243351-kube-api-access-z5nh4\") pod \"4d9181e3-1fa3-4039-ba55-0462c9243351\" (UID: \"4d9181e3-1fa3-4039-ba55-0462c9243351\") "
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.092250 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d9181e3-1fa3-4039-ba55-0462c9243351-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d9181e3-1fa3-4039-ba55-0462c9243351" (UID: "4d9181e3-1fa3-4039-ba55-0462c9243351"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.099722 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d9181e3-1fa3-4039-ba55-0462c9243351-kube-api-access-z5nh4" (OuterVolumeSpecName: "kube-api-access-z5nh4") pod "4d9181e3-1fa3-4039-ba55-0462c9243351" (UID: "4d9181e3-1fa3-4039-ba55-0462c9243351"). InnerVolumeSpecName "kube-api-access-z5nh4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.171401 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536878-fv4q5"]
Feb 27 17:18:00 crc kubenswrapper[4708]: E0227 17:18:00.171885 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d9181e3-1fa3-4039-ba55-0462c9243351" containerName="mariadb-account-create-update"
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.171898 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d9181e3-1fa3-4039-ba55-0462c9243351" containerName="mariadb-account-create-update"
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.172091 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d9181e3-1fa3-4039-ba55-0462c9243351" containerName="mariadb-account-create-update"
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.173136 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536878-fv4q5"
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.177132 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5"
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.177421 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.177523 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.184770 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536878-fv4q5"]
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.210756 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d9181e3-1fa3-4039-ba55-0462c9243351-operator-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.210781 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5nh4\" (UniqueName: \"kubernetes.io/projected/4d9181e3-1fa3-4039-ba55-0462c9243351-kube-api-access-z5nh4\") on node \"crc\" DevicePath \"\""
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.259408 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d201-account-create-update-gjhmj"
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.287621 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lnkws"
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.292828 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xrqrb"
Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.308770 4708 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openstack/nova-cell1-db-create-ds9xz" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.312358 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w5g6\" (UniqueName: \"kubernetes.io/projected/da11d788-6fb8-42b3-bdcd-4228dde954c3-kube-api-access-6w5g6\") pod \"auto-csr-approver-29536878-fv4q5\" (UID: \"da11d788-6fb8-42b3-bdcd-4228dde954c3\") " pod="openshift-infra/auto-csr-approver-29536878-fv4q5" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.413655 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b144171-78f3-46fd-ad40-aafb289868d5-operator-scripts\") pod \"7b144171-78f3-46fd-ad40-aafb289868d5\" (UID: \"7b144171-78f3-46fd-ad40-aafb289868d5\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.413715 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9f7x4\" (UniqueName: \"kubernetes.io/projected/7b144171-78f3-46fd-ad40-aafb289868d5-kube-api-access-9f7x4\") pod \"7b144171-78f3-46fd-ad40-aafb289868d5\" (UID: \"7b144171-78f3-46fd-ad40-aafb289868d5\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.413788 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrfsl\" (UniqueName: \"kubernetes.io/projected/6461de7d-1631-4115-becf-c90470540a61-kube-api-access-qrfsl\") pod \"6461de7d-1631-4115-becf-c90470540a61\" (UID: \"6461de7d-1631-4115-becf-c90470540a61\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.413983 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/018ebe44-d885-4630-be79-a1dd5dbc46ae-operator-scripts\") pod \"018ebe44-d885-4630-be79-a1dd5dbc46ae\" (UID: \"018ebe44-d885-4630-be79-a1dd5dbc46ae\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.414020 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6461de7d-1631-4115-becf-c90470540a61-operator-scripts\") pod \"6461de7d-1631-4115-becf-c90470540a61\" (UID: \"6461de7d-1631-4115-becf-c90470540a61\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.414053 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtxz5\" (UniqueName: \"kubernetes.io/projected/018ebe44-d885-4630-be79-a1dd5dbc46ae-kube-api-access-vtxz5\") pod \"018ebe44-d885-4630-be79-a1dd5dbc46ae\" (UID: \"018ebe44-d885-4630-be79-a1dd5dbc46ae\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.414087 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5lnk\" (UniqueName: \"kubernetes.io/projected/babefe61-6400-45bd-9c1a-2a20c9e0745b-kube-api-access-q5lnk\") pod \"babefe61-6400-45bd-9c1a-2a20c9e0745b\" (UID: \"babefe61-6400-45bd-9c1a-2a20c9e0745b\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.414126 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babefe61-6400-45bd-9c1a-2a20c9e0745b-operator-scripts\") pod \"babefe61-6400-45bd-9c1a-2a20c9e0745b\" (UID: \"babefe61-6400-45bd-9c1a-2a20c9e0745b\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.414429 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-6w5g6\" (UniqueName: \"kubernetes.io/projected/da11d788-6fb8-42b3-bdcd-4228dde954c3-kube-api-access-6w5g6\") pod \"auto-csr-approver-29536878-fv4q5\" (UID: \"da11d788-6fb8-42b3-bdcd-4228dde954c3\") " pod="openshift-infra/auto-csr-approver-29536878-fv4q5" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.415542 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/018ebe44-d885-4630-be79-a1dd5dbc46ae-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "018ebe44-d885-4630-be79-a1dd5dbc46ae" (UID: "018ebe44-d885-4630-be79-a1dd5dbc46ae"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.415534 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6461de7d-1631-4115-becf-c90470540a61-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6461de7d-1631-4115-becf-c90470540a61" (UID: "6461de7d-1631-4115-becf-c90470540a61"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.415828 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7b144171-78f3-46fd-ad40-aafb289868d5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "7b144171-78f3-46fd-ad40-aafb289868d5" (UID: "7b144171-78f3-46fd-ad40-aafb289868d5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.416263 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/babefe61-6400-45bd-9c1a-2a20c9e0745b-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "babefe61-6400-45bd-9c1a-2a20c9e0745b" (UID: "babefe61-6400-45bd-9c1a-2a20c9e0745b"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.419601 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6461de7d-1631-4115-becf-c90470540a61-kube-api-access-qrfsl" (OuterVolumeSpecName: "kube-api-access-qrfsl") pod "6461de7d-1631-4115-becf-c90470540a61" (UID: "6461de7d-1631-4115-becf-c90470540a61"). InnerVolumeSpecName "kube-api-access-qrfsl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.419877 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/018ebe44-d885-4630-be79-a1dd5dbc46ae-kube-api-access-vtxz5" (OuterVolumeSpecName: "kube-api-access-vtxz5") pod "018ebe44-d885-4630-be79-a1dd5dbc46ae" (UID: "018ebe44-d885-4630-be79-a1dd5dbc46ae"). InnerVolumeSpecName "kube-api-access-vtxz5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.420011 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7b144171-78f3-46fd-ad40-aafb289868d5-kube-api-access-9f7x4" (OuterVolumeSpecName: "kube-api-access-9f7x4") pod "7b144171-78f3-46fd-ad40-aafb289868d5" (UID: "7b144171-78f3-46fd-ad40-aafb289868d5"). InnerVolumeSpecName "kube-api-access-9f7x4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.432035 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/babefe61-6400-45bd-9c1a-2a20c9e0745b-kube-api-access-q5lnk" (OuterVolumeSpecName: "kube-api-access-q5lnk") pod "babefe61-6400-45bd-9c1a-2a20c9e0745b" (UID: "babefe61-6400-45bd-9c1a-2a20c9e0745b"). InnerVolumeSpecName "kube-api-access-q5lnk". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.443466 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w5g6\" (UniqueName: \"kubernetes.io/projected/da11d788-6fb8-42b3-bdcd-4228dde954c3-kube-api-access-6w5g6\") pod \"auto-csr-approver-29536878-fv4q5\" (UID: \"da11d788-6fb8-42b3-bdcd-4228dde954c3\") " pod="openshift-infra/auto-csr-approver-29536878-fv4q5" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.485915 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8912-account-create-update-8crv5" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.486474 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-43f3-account-create-update-92rv7" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.486505 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-43f3-account-create-update-92rv7" event={"ID":"4d9181e3-1fa3-4039-ba55-0462c9243351","Type":"ContainerDied","Data":"2711e8ab3c910c41988e953eae6022d337f0d3cc0ed0da280342a8bf55075f65"} Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.486541 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2711e8ab3c910c41988e953eae6022d337f0d3cc0ed0da280342a8bf55075f65" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.488362 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1","Type":"ContainerStarted","Data":"e04901768e3c403c49320e76f9f26a6affd1f72e243d6ab782f1057d6a236b68"} Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.494077 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d158057b-69db-47c4-8361-17ceba3ede55","Type":"ContainerStarted","Data":"530584b49d7d7a5b4eccc55282fad1634c2c8ffccc12cf36c53ea4d3db030e3e"} Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.516474 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrfsl\" (UniqueName: \"kubernetes.io/projected/6461de7d-1631-4115-becf-c90470540a61-kube-api-access-qrfsl\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.516502 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/018ebe44-d885-4630-be79-a1dd5dbc46ae-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.516511 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6461de7d-1631-4115-becf-c90470540a61-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.516520 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtxz5\" (UniqueName: 
\"kubernetes.io/projected/018ebe44-d885-4630-be79-a1dd5dbc46ae-kube-api-access-vtxz5\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.516528 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5lnk\" (UniqueName: \"kubernetes.io/projected/babefe61-6400-45bd-9c1a-2a20c9e0745b-kube-api-access-q5lnk\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.516536 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/babefe61-6400-45bd-9c1a-2a20c9e0745b-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.516545 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/7b144171-78f3-46fd-ad40-aafb289868d5-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.516553 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9f7x4\" (UniqueName: \"kubernetes.io/projected/7b144171-78f3-46fd-ad40-aafb289868d5-kube-api-access-9f7x4\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.521975 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-d201-account-create-update-gjhmj" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.522020 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-d201-account-create-update-gjhmj" event={"ID":"babefe61-6400-45bd-9c1a-2a20c9e0745b","Type":"ContainerDied","Data":"1965ef6f88aaa460109691d2028f07cba5b1a41235e1457ab9390ee66c5810d1"} Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.522052 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1965ef6f88aaa460109691d2028f07cba5b1a41235e1457ab9390ee66c5810d1" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.527173 4708 generic.go:334] "Generic (PLEG): container finished" podID="d6d082cd-70c3-4ee1-9675-294347882c7d" containerID="24738811b9ec3e9321ef8fc2690e4119c5c5b9e5efa38ce2493c447cbc025390" exitCode=0 Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.527235 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6d082cd-70c3-4ee1-9675-294347882c7d","Type":"ContainerDied","Data":"24738811b9ec3e9321ef8fc2690e4119c5c5b9e5efa38ce2493c447cbc025390"} Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.531046 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-xrqrb" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.531043 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-xrqrb" event={"ID":"018ebe44-d885-4630-be79-a1dd5dbc46ae","Type":"ContainerDied","Data":"2dc86500fae1f45e8c998ee84826b65fbd004769d96981bd65d3fa5915a539cf"} Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.531189 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2dc86500fae1f45e8c998ee84826b65fbd004769d96981bd65d3fa5915a539cf" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.531412 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.533706 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-ds9xz" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.533986 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-ds9xz" event={"ID":"7b144171-78f3-46fd-ad40-aafb289868d5","Type":"ContainerDied","Data":"f0304dad2ca2f5690acb327ca6939a603ab4c271be52886967cf2eac15f13efe"} Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.534012 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0304dad2ca2f5690acb327ca6939a603ab4c271be52886967cf2eac15f13efe" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.540407 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-lnkws" event={"ID":"6461de7d-1631-4115-becf-c90470540a61","Type":"ContainerDied","Data":"d4d8ec512808d89dc8dac8be5ab1b56243c83443a4e2fe7b8dbf13dba6f16a3f"} Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.540432 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4d8ec512808d89dc8dac8be5ab1b56243c83443a4e2fe7b8dbf13dba6f16a3f" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.540476 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-lnkws" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.561180 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536878-fv4q5" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.618017 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb8e6804-81dd-4862-af76-3015e030b84d-operator-scripts\") pod \"eb8e6804-81dd-4862-af76-3015e030b84d\" (UID: \"eb8e6804-81dd-4862-af76-3015e030b84d\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.618372 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"d6d082cd-70c3-4ee1-9675-294347882c7d\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.618409 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6d082cd-70c3-4ee1-9675-294347882c7d-httpd-run\") pod \"d6d082cd-70c3-4ee1-9675-294347882c7d\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.618534 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6d082cd-70c3-4ee1-9675-294347882c7d-logs\") pod \"d6d082cd-70c3-4ee1-9675-294347882c7d\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.618595 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-config-data\") pod \"d6d082cd-70c3-4ee1-9675-294347882c7d\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.618685 4708 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-5tc4x\" (UniqueName: \"kubernetes.io/projected/d6d082cd-70c3-4ee1-9675-294347882c7d-kube-api-access-5tc4x\") pod \"d6d082cd-70c3-4ee1-9675-294347882c7d\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.618784 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-internal-tls-certs\") pod \"d6d082cd-70c3-4ee1-9675-294347882c7d\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.619047 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdhdt\" (UniqueName: \"kubernetes.io/projected/eb8e6804-81dd-4862-af76-3015e030b84d-kube-api-access-gdhdt\") pod \"eb8e6804-81dd-4862-af76-3015e030b84d\" (UID: \"eb8e6804-81dd-4862-af76-3015e030b84d\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.619080 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-scripts\") pod \"d6d082cd-70c3-4ee1-9675-294347882c7d\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.619096 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-combined-ca-bundle\") pod \"d6d082cd-70c3-4ee1-9675-294347882c7d\" (UID: \"d6d082cd-70c3-4ee1-9675-294347882c7d\") " Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.620187 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6d082cd-70c3-4ee1-9675-294347882c7d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d6d082cd-70c3-4ee1-9675-294347882c7d" (UID: "d6d082cd-70c3-4ee1-9675-294347882c7d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.620438 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb8e6804-81dd-4862-af76-3015e030b84d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "eb8e6804-81dd-4862-af76-3015e030b84d" (UID: "eb8e6804-81dd-4862-af76-3015e030b84d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.620642 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6d082cd-70c3-4ee1-9675-294347882c7d-logs" (OuterVolumeSpecName: "logs") pod "d6d082cd-70c3-4ee1-9675-294347882c7d" (UID: "d6d082cd-70c3-4ee1-9675-294347882c7d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.637068 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb8e6804-81dd-4862-af76-3015e030b84d-kube-api-access-gdhdt" (OuterVolumeSpecName: "kube-api-access-gdhdt") pod "eb8e6804-81dd-4862-af76-3015e030b84d" (UID: "eb8e6804-81dd-4862-af76-3015e030b84d"). InnerVolumeSpecName "kube-api-access-gdhdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.637189 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6d082cd-70c3-4ee1-9675-294347882c7d-kube-api-access-5tc4x" (OuterVolumeSpecName: "kube-api-access-5tc4x") pod "d6d082cd-70c3-4ee1-9675-294347882c7d" (UID: "d6d082cd-70c3-4ee1-9675-294347882c7d"). InnerVolumeSpecName "kube-api-access-5tc4x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.650052 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-scripts" (OuterVolumeSpecName: "scripts") pod "d6d082cd-70c3-4ee1-9675-294347882c7d" (UID: "d6d082cd-70c3-4ee1-9675-294347882c7d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.689094 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d6d082cd-70c3-4ee1-9675-294347882c7d" (UID: "d6d082cd-70c3-4ee1-9675-294347882c7d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.721592 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5tc4x\" (UniqueName: \"kubernetes.io/projected/d6d082cd-70c3-4ee1-9675-294347882c7d-kube-api-access-5tc4x\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.721617 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gdhdt\" (UniqueName: \"kubernetes.io/projected/eb8e6804-81dd-4862-af76-3015e030b84d-kube-api-access-gdhdt\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.721626 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.721635 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.721645 4708 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/eb8e6804-81dd-4862-af76-3015e030b84d-operator-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.721652 4708 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d6d082cd-70c3-4ee1-9675-294347882c7d-httpd-run\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.721662 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6d082cd-70c3-4ee1-9675-294347882c7d-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.730009 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-config-data" (OuterVolumeSpecName: "config-data") pod "d6d082cd-70c3-4ee1-9675-294347882c7d" (UID: 
"d6d082cd-70c3-4ee1-9675-294347882c7d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.824347 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.885084 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "d6d082cd-70c3-4ee1-9675-294347882c7d" (UID: "d6d082cd-70c3-4ee1-9675-294347882c7d"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.930225 4708 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6d082cd-70c3-4ee1-9675-294347882c7d-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:00 crc kubenswrapper[4708]: I0227 17:18:00.957538 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59" (OuterVolumeSpecName: "glance") pod "d6d082cd-70c3-4ee1-9675-294347882c7d" (UID: "d6d082cd-70c3-4ee1-9675-294347882c7d"). InnerVolumeSpecName "pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59". PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.031558 4708 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") on node \"crc\" " Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.099653 4708 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.100314 4708 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59") on node "crc"
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.140084 4708 reconciler_common.go:293] "Volume detached for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") on node \"crc\" DevicePath \"\""
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.309224 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536878-fv4q5"]
Feb 27 17:18:01 crc kubenswrapper[4708]: W0227 17:18:01.317013 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda11d788_6fb8_42b3_bdcd_4228dde954c3.slice/crio-71045bb9707ba3ede9d99822e2a2533d30b3b93378b3e525aaf5b044199baa91 WatchSource:0}: Error finding container 71045bb9707ba3ede9d99822e2a2533d30b3b93378b3e525aaf5b044199baa91: Status 404 returned error can't find the container with id 71045bb9707ba3ede9d99822e2a2533d30b3b93378b3e525aaf5b044199baa91
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.552572 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d158057b-69db-47c4-8361-17ceba3ede55","Type":"ContainerStarted","Data":"4849b0f7eac085b0fa6889fe5a042ff990ae5d3e248647129d269329c2c11095"}
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.553694 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8912-account-create-update-8crv5" event={"ID":"eb8e6804-81dd-4862-af76-3015e030b84d","Type":"ContainerDied","Data":"d2624b38985ecd72f704359a759929ae707908e5ba4429466cb49b8534939eb8"}
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.553712 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d2624b38985ecd72f704359a759929ae707908e5ba4429466cb49b8534939eb8"
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.553774 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8912-account-create-update-8crv5"
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.561025 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"d6d082cd-70c3-4ee1-9675-294347882c7d","Type":"ContainerDied","Data":"1ed0815f99cea9e28e9772e35c6f330ccc07f5e4d6be2c574f9ecff309e0b66d"}
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.561100 4708 scope.go:117] "RemoveContainer" containerID="24738811b9ec3e9321ef8fc2690e4119c5c5b9e5efa38ce2493c447cbc025390"
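[Note: the interleaved "Generic (PLEG): container finished" and "SyncLoop (PLEG): event for pod" records come from the pod lifecycle event generator: it relists containers from CRI-O on a short interval, diffs each container's state against the previous relist, and feeds the resulting ContainerStarted/ContainerDied events into the kubelet sync loop. A toy version of that relist diff, with simplified types rather than kubelet's:]

```go
package main

import "fmt"

// containerState mirrors the two transitions visible in these logs.
type containerState string

const (
	running containerState = "running"
	exited  containerState = "exited"
)

type event struct{ containerID, kind string }

// relist emits a lifecycle event for every container whose state changed
// between two relists -- the diff behind the "Generic (PLEG)" lines.
func relist(old, cur map[string]containerState) []event {
	var events []event
	for id, state := range cur {
		if old[id] != state {
			switch state {
			case running:
				events = append(events, event{id, "ContainerStarted"})
			case exited:
				events = append(events, event{id, "ContainerDied"})
			}
		}
	}
	return events
}

func main() {
	old := map[string]containerState{"b5cba250": running}
	cur := map[string]containerState{"b5cba250": exited}
	fmt.Println(relist(old, cur)) // [{b5cba250 ContainerDied}]
}
```

[A ContainerDied event for a pod's last container is what triggers the follow-up records seen here: the "Container not found in pod's containers" cleanup and the "No ready sandbox for pod can be found" note on the next sync.]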
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.561097 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.566392 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536878-fv4q5" event={"ID":"da11d788-6fb8-42b3-bdcd-4228dde954c3","Type":"ContainerStarted","Data":"71045bb9707ba3ede9d99822e2a2533d30b3b93378b3e525aaf5b044199baa91"}
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.568611 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1","Type":"ContainerStarted","Data":"bedb2d54cd11fa3cbf969d44633ac13fcabb0f93e76bb20c92b3b4ec556b70a9"}
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.587818 4708 scope.go:117] "RemoveContainer" containerID="c0ed96637e848a67aa41fb01c560f7c5d9659c8953c083017454fd907b1a3a07"
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.604142 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.60412363 podStartE2EDuration="4.60412363s" podCreationTimestamp="2026-02-27 17:17:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:18:01.59167442 +0000 UTC m=+1480.107472007" watchObservedRunningTime="2026-02-27 17:18:01.60412363 +0000 UTC m=+1480.119921227"
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.668417 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.688436 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.710656 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Feb 27 17:18:01 crc kubenswrapper[4708]: E0227 17:18:01.711162 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7b144171-78f3-46fd-ad40-aafb289868d5" containerName="mariadb-database-create"
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.711176 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7b144171-78f3-46fd-ad40-aafb289868d5" containerName="mariadb-database-create"
Feb 27 17:18:01 crc kubenswrapper[4708]: E0227 17:18:01.711191 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6d082cd-70c3-4ee1-9675-294347882c7d" containerName="glance-log"
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.711197 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6d082cd-70c3-4ee1-9675-294347882c7d" containerName="glance-log"
Feb 27 17:18:01 crc kubenswrapper[4708]: E0227 17:18:01.711210 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="babefe61-6400-45bd-9c1a-2a20c9e0745b" containerName="mariadb-account-create-update"
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.711218 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="babefe61-6400-45bd-9c1a-2a20c9e0745b" containerName="mariadb-account-create-update"
Feb 27 17:18:01 crc kubenswrapper[4708]: E0227 17:18:01.711233 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="018ebe44-d885-4630-be79-a1dd5dbc46ae" containerName="mariadb-database-create"
Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.711239 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="018ebe44-d885-4630-be79-a1dd5dbc46ae" 
containerName="mariadb-database-create" Feb 27 17:18:01 crc kubenswrapper[4708]: E0227 17:18:01.711249 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb8e6804-81dd-4862-af76-3015e030b84d" containerName="mariadb-account-create-update" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.711267 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb8e6804-81dd-4862-af76-3015e030b84d" containerName="mariadb-account-create-update" Feb 27 17:18:01 crc kubenswrapper[4708]: E0227 17:18:01.711287 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6461de7d-1631-4115-becf-c90470540a61" containerName="mariadb-database-create" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.711294 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="6461de7d-1631-4115-becf-c90470540a61" containerName="mariadb-database-create" Feb 27 17:18:01 crc kubenswrapper[4708]: E0227 17:18:01.711302 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6d082cd-70c3-4ee1-9675-294347882c7d" containerName="glance-httpd" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.711309 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6d082cd-70c3-4ee1-9675-294347882c7d" containerName="glance-httpd" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.711479 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb8e6804-81dd-4862-af76-3015e030b84d" containerName="mariadb-account-create-update" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.711492 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7b144171-78f3-46fd-ad40-aafb289868d5" containerName="mariadb-database-create" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.711501 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="babefe61-6400-45bd-9c1a-2a20c9e0745b" containerName="mariadb-account-create-update" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.711510 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="6461de7d-1631-4115-becf-c90470540a61" containerName="mariadb-database-create" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.711522 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="018ebe44-d885-4630-be79-a1dd5dbc46ae" containerName="mariadb-database-create" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.711534 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6d082cd-70c3-4ee1-9675-294347882c7d" containerName="glance-log" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.711551 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6d082cd-70c3-4ee1-9675-294347882c7d" containerName="glance-httpd" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.712664 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.721659 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.725471 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.725643 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.865125 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d218377-bee6-44e0-a6f7-ef62a33366e0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.865252 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d218377-bee6-44e0-a6f7-ef62a33366e0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.865348 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx9n5\" (UniqueName: \"kubernetes.io/projected/1d218377-bee6-44e0-a6f7-ef62a33366e0-kube-api-access-mx9n5\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.865381 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.865441 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d218377-bee6-44e0-a6f7-ef62a33366e0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.865461 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d218377-bee6-44e0-a6f7-ef62a33366e0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.865495 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d218377-bee6-44e0-a6f7-ef62a33366e0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.865525 4708 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d218377-bee6-44e0-a6f7-ef62a33366e0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.967008 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mx9n5\" (UniqueName: \"kubernetes.io/projected/1d218377-bee6-44e0-a6f7-ef62a33366e0-kube-api-access-mx9n5\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.967052 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.967106 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d218377-bee6-44e0-a6f7-ef62a33366e0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.967127 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d218377-bee6-44e0-a6f7-ef62a33366e0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.967146 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d218377-bee6-44e0-a6f7-ef62a33366e0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.967174 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d218377-bee6-44e0-a6f7-ef62a33366e0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.967195 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d218377-bee6-44e0-a6f7-ef62a33366e0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.967250 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d218377-bee6-44e0-a6f7-ef62a33366e0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.967641 4708 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1d218377-bee6-44e0-a6f7-ef62a33366e0-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.968346 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1d218377-bee6-44e0-a6f7-ef62a33366e0-logs\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.969397 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.969425 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6cf9b68842a44daba5610601208ef38850856ede1b5f40d133ba6995034e3af2/globalmount\"" pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.972819 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1d218377-bee6-44e0-a6f7-ef62a33366e0-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.974099 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1d218377-bee6-44e0-a6f7-ef62a33366e0-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.975057 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1d218377-bee6-44e0-a6f7-ef62a33366e0-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.986347 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mx9n5\" (UniqueName: \"kubernetes.io/projected/1d218377-bee6-44e0-a6f7-ef62a33366e0-kube-api-access-mx9n5\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:01 crc kubenswrapper[4708]: I0227 17:18:01.986720 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1d218377-bee6-44e0-a6f7-ef62a33366e0-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.019773 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-0cb9924d-42ef-4832-bb48-cf54f019ec59\") pod \"glance-default-internal-api-0\" (UID: \"1d218377-bee6-44e0-a6f7-ef62a33366e0\") " pod="openstack/glance-default-internal-api-0" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.046382 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.238888 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6d082cd-70c3-4ee1-9675-294347882c7d" path="/var/lib/kubelet/pods/d6d082cd-70c3-4ee1-9675-294347882c7d/volumes" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.618921 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-89gd4"] Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.627518 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.648270 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.648577 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.648635 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-k2cdd" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.648877 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-89gd4"] Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.710927 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.799623 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-config-data\") pod \"nova-cell0-conductor-db-sync-89gd4\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.799682 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-89gd4\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.799754 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qblq\" (UniqueName: \"kubernetes.io/projected/8b9d6fda-ab96-4cea-8fec-2c49b206d095-kube-api-access-9qblq\") pod \"nova-cell0-conductor-db-sync-89gd4\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.799789 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-scripts\") pod \"nova-cell0-conductor-db-sync-89gd4\" (UID: 
\"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.901328 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-config-data\") pod \"nova-cell0-conductor-db-sync-89gd4\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.901383 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-89gd4\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.901450 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qblq\" (UniqueName: \"kubernetes.io/projected/8b9d6fda-ab96-4cea-8fec-2c49b206d095-kube-api-access-9qblq\") pod \"nova-cell0-conductor-db-sync-89gd4\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.901484 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-scripts\") pod \"nova-cell0-conductor-db-sync-89gd4\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.909648 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-89gd4\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.910433 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-config-data\") pod \"nova-cell0-conductor-db-sync-89gd4\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.919729 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qblq\" (UniqueName: \"kubernetes.io/projected/8b9d6fda-ab96-4cea-8fec-2c49b206d095-kube-api-access-9qblq\") pod \"nova-cell0-conductor-db-sync-89gd4\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.919787 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-scripts\") pod \"nova-cell0-conductor-db-sync-89gd4\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:02 crc kubenswrapper[4708]: I0227 17:18:02.974681 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:03 crc kubenswrapper[4708]: I0227 17:18:03.575913 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-89gd4"] Feb 27 17:18:03 crc kubenswrapper[4708]: I0227 17:18:03.665481 4708 generic.go:334] "Generic (PLEG): container finished" podID="da11d788-6fb8-42b3-bdcd-4228dde954c3" containerID="c12b0eb6d81db8eb422e9b052ba45dc99776f9f251467608cfd43ea1104725df" exitCode=0 Feb 27 17:18:03 crc kubenswrapper[4708]: I0227 17:18:03.665550 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536878-fv4q5" event={"ID":"da11d788-6fb8-42b3-bdcd-4228dde954c3","Type":"ContainerDied","Data":"c12b0eb6d81db8eb422e9b052ba45dc99776f9f251467608cfd43ea1104725df"} Feb 27 17:18:03 crc kubenswrapper[4708]: I0227 17:18:03.667326 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d218377-bee6-44e0-a6f7-ef62a33366e0","Type":"ContainerStarted","Data":"7828465de3a957dedeffc3edc19d7fae6816b9a0c8185192387ec81385cf8915"} Feb 27 17:18:03 crc kubenswrapper[4708]: I0227 17:18:03.667347 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d218377-bee6-44e0-a6f7-ef62a33366e0","Type":"ContainerStarted","Data":"adeb09faf9960e86ce37a17ff2754e5cc8b4166fec514a2374490fa285b9756c"} Feb 27 17:18:03 crc kubenswrapper[4708]: I0227 17:18:03.669413 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-89gd4" event={"ID":"8b9d6fda-ab96-4cea-8fec-2c49b206d095","Type":"ContainerStarted","Data":"951538edd3d6e38dde4fda6fc9bd796275ace6ed1fe672a1c7596568eaf0be79"} Feb 27 17:18:04 crc kubenswrapper[4708]: I0227 17:18:04.681051 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"1d218377-bee6-44e0-a6f7-ef62a33366e0","Type":"ContainerStarted","Data":"b268e34538e3f8e7d3a2bd75c19bdce0b54672c5da174526f857e8d69375161c"} Feb 27 17:18:04 crc kubenswrapper[4708]: I0227 17:18:04.692610 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d158057b-69db-47c4-8361-17ceba3ede55","Type":"ContainerStarted","Data":"b1693c5fcb539856f2b1d6c2ae05787957cfee80f5e43a1acc1b45700050d6bb"} Feb 27 17:18:04 crc kubenswrapper[4708]: I0227 17:18:04.692789 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 17:18:04 crc kubenswrapper[4708]: I0227 17:18:04.723516 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.723495761 podStartE2EDuration="3.723495761s" podCreationTimestamp="2026-02-27 17:18:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:18:04.710772934 +0000 UTC m=+1483.226570531" watchObservedRunningTime="2026-02-27 17:18:04.723495761 +0000 UTC m=+1483.239293358" Feb 27 17:18:04 crc kubenswrapper[4708]: I0227 17:18:04.749124 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.544391184 podStartE2EDuration="8.749105961s" podCreationTimestamp="2026-02-27 17:17:56 +0000 UTC" firstStartedPulling="2026-02-27 17:17:57.333485393 +0000 UTC m=+1475.849282980" lastFinishedPulling="2026-02-27 17:18:03.53820017 +0000 UTC 
m=+1482.053997757" observedRunningTime="2026-02-27 17:18:04.739927563 +0000 UTC m=+1483.255725150" watchObservedRunningTime="2026-02-27 17:18:04.749105961 +0000 UTC m=+1483.264903548" Feb 27 17:18:05 crc kubenswrapper[4708]: I0227 17:18:05.152454 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536878-fv4q5" Feb 27 17:18:05 crc kubenswrapper[4708]: I0227 17:18:05.259304 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w5g6\" (UniqueName: \"kubernetes.io/projected/da11d788-6fb8-42b3-bdcd-4228dde954c3-kube-api-access-6w5g6\") pod \"da11d788-6fb8-42b3-bdcd-4228dde954c3\" (UID: \"da11d788-6fb8-42b3-bdcd-4228dde954c3\") " Feb 27 17:18:05 crc kubenswrapper[4708]: I0227 17:18:05.276166 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da11d788-6fb8-42b3-bdcd-4228dde954c3-kube-api-access-6w5g6" (OuterVolumeSpecName: "kube-api-access-6w5g6") pod "da11d788-6fb8-42b3-bdcd-4228dde954c3" (UID: "da11d788-6fb8-42b3-bdcd-4228dde954c3"). InnerVolumeSpecName "kube-api-access-6w5g6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:05 crc kubenswrapper[4708]: I0227 17:18:05.362648 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w5g6\" (UniqueName: \"kubernetes.io/projected/da11d788-6fb8-42b3-bdcd-4228dde954c3-kube-api-access-6w5g6\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:05 crc kubenswrapper[4708]: I0227 17:18:05.705588 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536878-fv4q5" Feb 27 17:18:05 crc kubenswrapper[4708]: I0227 17:18:05.705590 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536878-fv4q5" event={"ID":"da11d788-6fb8-42b3-bdcd-4228dde954c3","Type":"ContainerDied","Data":"71045bb9707ba3ede9d99822e2a2533d30b3b93378b3e525aaf5b044199baa91"} Feb 27 17:18:05 crc kubenswrapper[4708]: I0227 17:18:05.705651 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71045bb9707ba3ede9d99822e2a2533d30b3b93378b3e525aaf5b044199baa91" Feb 27 17:18:06 crc kubenswrapper[4708]: I0227 17:18:06.222229 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536872-h4tms"] Feb 27 17:18:06 crc kubenswrapper[4708]: I0227 17:18:06.238721 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536872-h4tms"] Feb 27 17:18:08 crc kubenswrapper[4708]: I0227 17:18:08.282916 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3343713b-a255-4d89-8501-22d02150e6ef" path="/var/lib/kubelet/pods/3343713b-a255-4d89-8501-22d02150e6ef/volumes" Feb 27 17:18:08 crc kubenswrapper[4708]: I0227 17:18:08.285140 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 17:18:08 crc kubenswrapper[4708]: I0227 17:18:08.285178 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Feb 27 17:18:08 crc kubenswrapper[4708]: I0227 17:18:08.336471 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Feb 27 17:18:08 crc kubenswrapper[4708]: I0227 17:18:08.387502 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-external-api-0" Feb 27 17:18:08 crc kubenswrapper[4708]: I0227 17:18:08.763892 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 17:18:08 crc kubenswrapper[4708]: I0227 17:18:08.763930 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Feb 27 17:18:10 crc kubenswrapper[4708]: I0227 17:18:10.779228 4708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 17:18:10 crc kubenswrapper[4708]: I0227 17:18:10.779501 4708 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 27 17:18:10 crc kubenswrapper[4708]: I0227 17:18:10.828262 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Feb 27 17:18:11 crc kubenswrapper[4708]: I0227 17:18:11.260216 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 17:18:11 crc kubenswrapper[4708]: I0227 17:18:11.286322 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Feb 27 17:18:12 crc kubenswrapper[4708]: I0227 17:18:12.046772 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 17:18:12 crc kubenswrapper[4708]: I0227 17:18:12.046817 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Feb 27 17:18:12 crc kubenswrapper[4708]: I0227 17:18:12.105213 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 17:18:12 crc kubenswrapper[4708]: I0227 17:18:12.107357 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Feb 27 17:18:12 crc kubenswrapper[4708]: I0227 17:18:12.798139 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 17:18:12 crc kubenswrapper[4708]: I0227 17:18:12.798175 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Feb 27 17:18:13 crc kubenswrapper[4708]: I0227 17:18:13.015565 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:18:13 crc kubenswrapper[4708]: I0227 17:18:13.016124 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="proxy-httpd" containerID="cri-o://b1693c5fcb539856f2b1d6c2ae05787957cfee80f5e43a1acc1b45700050d6bb" gracePeriod=30 Feb 27 17:18:13 crc kubenswrapper[4708]: I0227 17:18:13.016161 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="sg-core" containerID="cri-o://4849b0f7eac085b0fa6889fe5a042ff990ae5d3e248647129d269329c2c11095" gracePeriod=30 Feb 27 17:18:13 crc kubenswrapper[4708]: I0227 17:18:13.016224 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="ceilometer-notification-agent" containerID="cri-o://530584b49d7d7a5b4eccc55282fad1634c2c8ffccc12cf36c53ea4d3db030e3e" gracePeriod=30 Feb 27 17:18:13 crc kubenswrapper[4708]: I0227 
17:18:13.015815 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="ceilometer-central-agent" containerID="cri-o://460b651d793f3c91cae760c0edb1b0a5f7a3a7025aa2af33d22c631a4d561d5b" gracePeriod=30 Feb 27 17:18:13 crc kubenswrapper[4708]: I0227 17:18:13.025684 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.209:3000/\": EOF" Feb 27 17:18:13 crc kubenswrapper[4708]: I0227 17:18:13.808460 4708 generic.go:334] "Generic (PLEG): container finished" podID="d158057b-69db-47c4-8361-17ceba3ede55" containerID="b1693c5fcb539856f2b1d6c2ae05787957cfee80f5e43a1acc1b45700050d6bb" exitCode=0 Feb 27 17:18:13 crc kubenswrapper[4708]: I0227 17:18:13.808724 4708 generic.go:334] "Generic (PLEG): container finished" podID="d158057b-69db-47c4-8361-17ceba3ede55" containerID="4849b0f7eac085b0fa6889fe5a042ff990ae5d3e248647129d269329c2c11095" exitCode=2 Feb 27 17:18:13 crc kubenswrapper[4708]: I0227 17:18:13.808530 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d158057b-69db-47c4-8361-17ceba3ede55","Type":"ContainerDied","Data":"b1693c5fcb539856f2b1d6c2ae05787957cfee80f5e43a1acc1b45700050d6bb"} Feb 27 17:18:13 crc kubenswrapper[4708]: I0227 17:18:13.808769 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d158057b-69db-47c4-8361-17ceba3ede55","Type":"ContainerDied","Data":"4849b0f7eac085b0fa6889fe5a042ff990ae5d3e248647129d269329c2c11095"} Feb 27 17:18:13 crc kubenswrapper[4708]: I0227 17:18:13.808782 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d158057b-69db-47c4-8361-17ceba3ede55","Type":"ContainerDied","Data":"460b651d793f3c91cae760c0edb1b0a5f7a3a7025aa2af33d22c631a4d561d5b"} Feb 27 17:18:13 crc kubenswrapper[4708]: I0227 17:18:13.808735 4708 generic.go:334] "Generic (PLEG): container finished" podID="d158057b-69db-47c4-8361-17ceba3ede55" containerID="460b651d793f3c91cae760c0edb1b0a5f7a3a7025aa2af33d22c631a4d561d5b" exitCode=0 Feb 27 17:18:14 crc kubenswrapper[4708]: I0227 17:18:14.820612 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 17:18:14 crc kubenswrapper[4708]: I0227 17:18:14.822761 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Feb 27 17:18:14 crc kubenswrapper[4708]: I0227 17:18:14.826669 4708 generic.go:334] "Generic (PLEG): container finished" podID="d158057b-69db-47c4-8361-17ceba3ede55" containerID="530584b49d7d7a5b4eccc55282fad1634c2c8ffccc12cf36c53ea4d3db030e3e" exitCode=0 Feb 27 17:18:14 crc kubenswrapper[4708]: I0227 17:18:14.827236 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d158057b-69db-47c4-8361-17ceba3ede55","Type":"ContainerDied","Data":"530584b49d7d7a5b4eccc55282fad1634c2c8ffccc12cf36c53ea4d3db030e3e"} Feb 27 17:18:16 crc kubenswrapper[4708]: I0227 17:18:16.848616 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"d158057b-69db-47c4-8361-17ceba3ede55","Type":"ContainerDied","Data":"a8b66da4f8aef7ae70f4911d9efe25e9ae5a20fa3babec08f567f91cfa7b54a6"} Feb 27 17:18:16 crc kubenswrapper[4708]: I0227 17:18:16.849284 4708 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8b66da4f8aef7ae70f4911d9efe25e9ae5a20fa3babec08f567f91cfa7b54a6" Feb 27 17:18:16 crc kubenswrapper[4708]: I0227 17:18:16.851073 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-89gd4" event={"ID":"8b9d6fda-ab96-4cea-8fec-2c49b206d095","Type":"ContainerStarted","Data":"ba68da5c684eed94fd18a76506eae572d8320c027a9fcd8a7d2a7df216c4b28a"} Feb 27 17:18:16 crc kubenswrapper[4708]: I0227 17:18:16.871564 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-89gd4" podStartSLOduration=1.964912489 podStartE2EDuration="14.871547662s" podCreationTimestamp="2026-02-27 17:18:02 +0000 UTC" firstStartedPulling="2026-02-27 17:18:03.599993786 +0000 UTC m=+1482.115791373" lastFinishedPulling="2026-02-27 17:18:16.506628959 +0000 UTC m=+1495.022426546" observedRunningTime="2026-02-27 17:18:16.870084151 +0000 UTC m=+1495.385881738" watchObservedRunningTime="2026-02-27 17:18:16.871547662 +0000 UTC m=+1495.387345249" Feb 27 17:18:16 crc kubenswrapper[4708]: I0227 17:18:16.876345 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.008747 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d158057b-69db-47c4-8361-17ceba3ede55-run-httpd\") pod \"d158057b-69db-47c4-8361-17ceba3ede55\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.009283 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tflfp\" (UniqueName: \"kubernetes.io/projected/d158057b-69db-47c4-8361-17ceba3ede55-kube-api-access-tflfp\") pod \"d158057b-69db-47c4-8361-17ceba3ede55\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.009884 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d158057b-69db-47c4-8361-17ceba3ede55-log-httpd\") pod \"d158057b-69db-47c4-8361-17ceba3ede55\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.009223 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d158057b-69db-47c4-8361-17ceba3ede55-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "d158057b-69db-47c4-8361-17ceba3ede55" (UID: "d158057b-69db-47c4-8361-17ceba3ede55"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.010071 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-combined-ca-bundle\") pod \"d158057b-69db-47c4-8361-17ceba3ede55\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.010141 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-scripts\") pod \"d158057b-69db-47c4-8361-17ceba3ede55\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.010190 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-sg-core-conf-yaml\") pod \"d158057b-69db-47c4-8361-17ceba3ede55\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.010284 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-config-data\") pod \"d158057b-69db-47c4-8361-17ceba3ede55\" (UID: \"d158057b-69db-47c4-8361-17ceba3ede55\") " Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.010568 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d158057b-69db-47c4-8361-17ceba3ede55-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "d158057b-69db-47c4-8361-17ceba3ede55" (UID: "d158057b-69db-47c4-8361-17ceba3ede55"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.011008 4708 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d158057b-69db-47c4-8361-17ceba3ede55-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.011031 4708 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/d158057b-69db-47c4-8361-17ceba3ede55-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.016010 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d158057b-69db-47c4-8361-17ceba3ede55-kube-api-access-tflfp" (OuterVolumeSpecName: "kube-api-access-tflfp") pod "d158057b-69db-47c4-8361-17ceba3ede55" (UID: "d158057b-69db-47c4-8361-17ceba3ede55"). InnerVolumeSpecName "kube-api-access-tflfp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.016083 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-scripts" (OuterVolumeSpecName: "scripts") pod "d158057b-69db-47c4-8361-17ceba3ede55" (UID: "d158057b-69db-47c4-8361-17ceba3ede55"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.048438 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "d158057b-69db-47c4-8361-17ceba3ede55" (UID: "d158057b-69db-47c4-8361-17ceba3ede55"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.093433 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d158057b-69db-47c4-8361-17ceba3ede55" (UID: "d158057b-69db-47c4-8361-17ceba3ede55"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.112695 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tflfp\" (UniqueName: \"kubernetes.io/projected/d158057b-69db-47c4-8361-17ceba3ede55-kube-api-access-tflfp\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.112737 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.112751 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.112763 4708 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.123904 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-config-data" (OuterVolumeSpecName: "config-data") pod "d158057b-69db-47c4-8361-17ceba3ede55" (UID: "d158057b-69db-47c4-8361-17ceba3ede55"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.215134 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d158057b-69db-47c4-8361-17ceba3ede55-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.863752 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.919270 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.937782 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.951783 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:18:17 crc kubenswrapper[4708]: E0227 17:18:17.952330 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="ceilometer-notification-agent" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.952355 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="ceilometer-notification-agent" Feb 27 17:18:17 crc kubenswrapper[4708]: E0227 17:18:17.952389 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da11d788-6fb8-42b3-bdcd-4228dde954c3" containerName="oc" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.952398 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="da11d788-6fb8-42b3-bdcd-4228dde954c3" containerName="oc" Feb 27 17:18:17 crc kubenswrapper[4708]: E0227 17:18:17.952420 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="sg-core" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.952428 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="sg-core" Feb 27 17:18:17 crc kubenswrapper[4708]: E0227 17:18:17.952446 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="ceilometer-central-agent" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.952455 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="ceilometer-central-agent" Feb 27 17:18:17 crc kubenswrapper[4708]: E0227 17:18:17.952489 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="proxy-httpd" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.952498 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="proxy-httpd" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.952784 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="proxy-httpd" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.952807 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="da11d788-6fb8-42b3-bdcd-4228dde954c3" containerName="oc" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.952818 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="sg-core" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.952834 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="ceilometer-central-agent" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.952939 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d158057b-69db-47c4-8361-17ceba3ede55" containerName="ceilometer-notification-agent" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.955252 
4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.956985 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.957062 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 17:18:17 crc kubenswrapper[4708]: I0227 17:18:17.962814 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.137027 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.137142 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-run-httpd\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.137275 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-scripts\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.137418 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-log-httpd\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.137477 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.137607 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtzhj\" (UniqueName: \"kubernetes.io/projected/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-kube-api-access-jtzhj\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.137732 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-config-data\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.239378 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-config-data\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 
crc kubenswrapper[4708]: I0227 17:18:18.239471 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.239544 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-run-httpd\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.239582 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-scripts\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.239626 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-log-httpd\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.239668 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.239722 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jtzhj\" (UniqueName: \"kubernetes.io/projected/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-kube-api-access-jtzhj\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.240811 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-run-httpd\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.241043 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-log-httpd\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.245322 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-scripts\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.245671 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.258204 4708 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.264084 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d158057b-69db-47c4-8361-17ceba3ede55" path="/var/lib/kubelet/pods/d158057b-69db-47c4-8361-17ceba3ede55/volumes" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.279466 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jtzhj\" (UniqueName: \"kubernetes.io/projected/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-kube-api-access-jtzhj\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.281053 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-config-data\") pod \"ceilometer-0\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " pod="openstack/ceilometer-0" Feb 27 17:18:18 crc kubenswrapper[4708]: I0227 17:18:18.574833 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:18:19 crc kubenswrapper[4708]: I0227 17:18:19.068664 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:18:19 crc kubenswrapper[4708]: I0227 17:18:19.891268 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d911ddc-1b45-4338-a04b-bd45fa68c6b3","Type":"ContainerStarted","Data":"a30287092c7afe0d218e77ceb467ba766ccd65f3a36c8673e3721d0159d328ab"} Feb 27 17:18:20 crc kubenswrapper[4708]: I0227 17:18:20.905257 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d911ddc-1b45-4338-a04b-bd45fa68c6b3","Type":"ContainerStarted","Data":"9a03affd3af6b8eb400d98166887110e7e2c7635a53dda9906a8ed2e0ddae35d"} Feb 27 17:18:20 crc kubenswrapper[4708]: I0227 17:18:20.905511 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d911ddc-1b45-4338-a04b-bd45fa68c6b3","Type":"ContainerStarted","Data":"69410d999819e5a1f1752ac9a4a43cf0f85d5fb8cc17128335ebe0b607ca5ece"} Feb 27 17:18:21 crc kubenswrapper[4708]: I0227 17:18:21.881166 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:18:21 crc kubenswrapper[4708]: I0227 17:18:21.918395 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d911ddc-1b45-4338-a04b-bd45fa68c6b3","Type":"ContainerStarted","Data":"09bba017e9ea0abb9ec3f38d9db19e439b09eb4cc97785bf47b5f8b2572e71f1"} Feb 27 17:18:23 crc kubenswrapper[4708]: I0227 17:18:23.937641 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d911ddc-1b45-4338-a04b-bd45fa68c6b3","Type":"ContainerStarted","Data":"f1dbc32b3a2081d2ec4ab558d40443793d478d6e4e5458c524470310e4b81c00"} Feb 27 17:18:23 crc kubenswrapper[4708]: I0227 17:18:23.938031 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="ceilometer-central-agent" containerID="cri-o://69410d999819e5a1f1752ac9a4a43cf0f85d5fb8cc17128335ebe0b607ca5ece" gracePeriod=30 
Feb 27 17:18:23 crc kubenswrapper[4708]: I0227 17:18:23.938280 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 17:18:23 crc kubenswrapper[4708]: I0227 17:18:23.938289 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="proxy-httpd" containerID="cri-o://f1dbc32b3a2081d2ec4ab558d40443793d478d6e4e5458c524470310e4b81c00" gracePeriod=30 Feb 27 17:18:23 crc kubenswrapper[4708]: I0227 17:18:23.938373 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="sg-core" containerID="cri-o://09bba017e9ea0abb9ec3f38d9db19e439b09eb4cc97785bf47b5f8b2572e71f1" gracePeriod=30 Feb 27 17:18:23 crc kubenswrapper[4708]: I0227 17:18:23.938414 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="ceilometer-notification-agent" containerID="cri-o://9a03affd3af6b8eb400d98166887110e7e2c7635a53dda9906a8ed2e0ddae35d" gracePeriod=30 Feb 27 17:18:23 crc kubenswrapper[4708]: I0227 17:18:23.967698 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.395633409 podStartE2EDuration="6.967676564s" podCreationTimestamp="2026-02-27 17:18:17 +0000 UTC" firstStartedPulling="2026-02-27 17:18:19.090631229 +0000 UTC m=+1497.606428836" lastFinishedPulling="2026-02-27 17:18:23.662674394 +0000 UTC m=+1502.178471991" observedRunningTime="2026-02-27 17:18:23.960976596 +0000 UTC m=+1502.476774193" watchObservedRunningTime="2026-02-27 17:18:23.967676564 +0000 UTC m=+1502.483474151" Feb 27 17:18:24 crc kubenswrapper[4708]: I0227 17:18:24.952478 4708 generic.go:334] "Generic (PLEG): container finished" podID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerID="09bba017e9ea0abb9ec3f38d9db19e439b09eb4cc97785bf47b5f8b2572e71f1" exitCode=2 Feb 27 17:18:24 crc kubenswrapper[4708]: I0227 17:18:24.952711 4708 generic.go:334] "Generic (PLEG): container finished" podID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerID="9a03affd3af6b8eb400d98166887110e7e2c7635a53dda9906a8ed2e0ddae35d" exitCode=0 Feb 27 17:18:24 crc kubenswrapper[4708]: I0227 17:18:24.952618 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d911ddc-1b45-4338-a04b-bd45fa68c6b3","Type":"ContainerDied","Data":"09bba017e9ea0abb9ec3f38d9db19e439b09eb4cc97785bf47b5f8b2572e71f1"} Feb 27 17:18:24 crc kubenswrapper[4708]: I0227 17:18:24.952751 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d911ddc-1b45-4338-a04b-bd45fa68c6b3","Type":"ContainerDied","Data":"9a03affd3af6b8eb400d98166887110e7e2c7635a53dda9906a8ed2e0ddae35d"} Feb 27 17:18:26 crc kubenswrapper[4708]: I0227 17:18:26.974516 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-89gd4" event={"ID":"8b9d6fda-ab96-4cea-8fec-2c49b206d095","Type":"ContainerDied","Data":"ba68da5c684eed94fd18a76506eae572d8320c027a9fcd8a7d2a7df216c4b28a"} Feb 27 17:18:26 crc kubenswrapper[4708]: I0227 17:18:26.974490 4708 generic.go:334] "Generic (PLEG): container finished" podID="8b9d6fda-ab96-4cea-8fec-2c49b206d095" containerID="ba68da5c684eed94fd18a76506eae572d8320c027a9fcd8a7d2a7df216c4b28a" exitCode=0 Feb 27 17:18:28 crc kubenswrapper[4708]: I0227 17:18:28.497595 
4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:28 crc kubenswrapper[4708]: I0227 17:18:28.569644 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-config-data\") pod \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " Feb 27 17:18:28 crc kubenswrapper[4708]: I0227 17:18:28.569729 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-scripts\") pod \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " Feb 27 17:18:28 crc kubenswrapper[4708]: I0227 17:18:28.569991 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-combined-ca-bundle\") pod \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " Feb 27 17:18:28 crc kubenswrapper[4708]: I0227 17:18:28.570042 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qblq\" (UniqueName: \"kubernetes.io/projected/8b9d6fda-ab96-4cea-8fec-2c49b206d095-kube-api-access-9qblq\") pod \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\" (UID: \"8b9d6fda-ab96-4cea-8fec-2c49b206d095\") " Feb 27 17:18:28 crc kubenswrapper[4708]: I0227 17:18:28.575940 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-scripts" (OuterVolumeSpecName: "scripts") pod "8b9d6fda-ab96-4cea-8fec-2c49b206d095" (UID: "8b9d6fda-ab96-4cea-8fec-2c49b206d095"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:28 crc kubenswrapper[4708]: I0227 17:18:28.578157 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b9d6fda-ab96-4cea-8fec-2c49b206d095-kube-api-access-9qblq" (OuterVolumeSpecName: "kube-api-access-9qblq") pod "8b9d6fda-ab96-4cea-8fec-2c49b206d095" (UID: "8b9d6fda-ab96-4cea-8fec-2c49b206d095"). InnerVolumeSpecName "kube-api-access-9qblq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:28 crc kubenswrapper[4708]: I0227 17:18:28.612911 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-config-data" (OuterVolumeSpecName: "config-data") pod "8b9d6fda-ab96-4cea-8fec-2c49b206d095" (UID: "8b9d6fda-ab96-4cea-8fec-2c49b206d095"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:28 crc kubenswrapper[4708]: I0227 17:18:28.627933 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b9d6fda-ab96-4cea-8fec-2c49b206d095" (UID: "8b9d6fda-ab96-4cea-8fec-2c49b206d095"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:28 crc kubenswrapper[4708]: I0227 17:18:28.672403 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:28 crc kubenswrapper[4708]: I0227 17:18:28.672447 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qblq\" (UniqueName: \"kubernetes.io/projected/8b9d6fda-ab96-4cea-8fec-2c49b206d095-kube-api-access-9qblq\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:28 crc kubenswrapper[4708]: I0227 17:18:28.672464 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:28 crc kubenswrapper[4708]: I0227 17:18:28.672475 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b9d6fda-ab96-4cea-8fec-2c49b206d095-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.001377 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-89gd4" event={"ID":"8b9d6fda-ab96-4cea-8fec-2c49b206d095","Type":"ContainerDied","Data":"951538edd3d6e38dde4fda6fc9bd796275ace6ed1fe672a1c7596568eaf0be79"} Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.001434 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="951538edd3d6e38dde4fda6fc9bd796275ace6ed1fe672a1c7596568eaf0be79" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.001444 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-89gd4" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.147686 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 17:18:29 crc kubenswrapper[4708]: E0227 17:18:29.148407 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b9d6fda-ab96-4cea-8fec-2c49b206d095" containerName="nova-cell0-conductor-db-sync" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.148437 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b9d6fda-ab96-4cea-8fec-2c49b206d095" containerName="nova-cell0-conductor-db-sync" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.148843 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b9d6fda-ab96-4cea-8fec-2c49b206d095" containerName="nova-cell0-conductor-db-sync" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.150010 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.152207 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.154013 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-k2cdd" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.160246 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.284194 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsffb\" (UniqueName: \"kubernetes.io/projected/d573ab41-daa8-4853-9698-d55d5e7664df-kube-api-access-dsffb\") pod \"nova-cell0-conductor-0\" (UID: \"d573ab41-daa8-4853-9698-d55d5e7664df\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.284281 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d573ab41-daa8-4853-9698-d55d5e7664df-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d573ab41-daa8-4853-9698-d55d5e7664df\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.284306 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d573ab41-daa8-4853-9698-d55d5e7664df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d573ab41-daa8-4853-9698-d55d5e7664df\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.386360 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsffb\" (UniqueName: \"kubernetes.io/projected/d573ab41-daa8-4853-9698-d55d5e7664df-kube-api-access-dsffb\") pod \"nova-cell0-conductor-0\" (UID: \"d573ab41-daa8-4853-9698-d55d5e7664df\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.387078 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d573ab41-daa8-4853-9698-d55d5e7664df-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"d573ab41-daa8-4853-9698-d55d5e7664df\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.387116 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d573ab41-daa8-4853-9698-d55d5e7664df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d573ab41-daa8-4853-9698-d55d5e7664df\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.392697 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d573ab41-daa8-4853-9698-d55d5e7664df-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"d573ab41-daa8-4853-9698-d55d5e7664df\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.393349 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d573ab41-daa8-4853-9698-d55d5e7664df-config-data\") pod \"nova-cell0-conductor-0\" 
(UID: \"d573ab41-daa8-4853-9698-d55d5e7664df\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.406402 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsffb\" (UniqueName: \"kubernetes.io/projected/d573ab41-daa8-4853-9698-d55d5e7664df-kube-api-access-dsffb\") pod \"nova-cell0-conductor-0\" (UID: \"d573ab41-daa8-4853-9698-d55d5e7664df\") " pod="openstack/nova-cell0-conductor-0" Feb 27 17:18:29 crc kubenswrapper[4708]: I0227 17:18:29.548320 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Feb 27 17:18:30 crc kubenswrapper[4708]: I0227 17:18:30.120075 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Feb 27 17:18:30 crc kubenswrapper[4708]: W0227 17:18:30.136877 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd573ab41_daa8_4853_9698_d55d5e7664df.slice/crio-2a17150c5f3830d4527b857fa8666e235430124602d85ccc5d4c298d8462fbfe WatchSource:0}: Error finding container 2a17150c5f3830d4527b857fa8666e235430124602d85ccc5d4c298d8462fbfe: Status 404 returned error can't find the container with id 2a17150c5f3830d4527b857fa8666e235430124602d85ccc5d4c298d8462fbfe Feb 27 17:18:31 crc kubenswrapper[4708]: I0227 17:18:31.055796 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d573ab41-daa8-4853-9698-d55d5e7664df","Type":"ContainerStarted","Data":"c8cc07222aaeefb1f1aabeeb1b3be86f2563d259673f309504b7bf76a67c66b5"} Feb 27 17:18:31 crc kubenswrapper[4708]: I0227 17:18:31.056628 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"d573ab41-daa8-4853-9698-d55d5e7664df","Type":"ContainerStarted","Data":"2a17150c5f3830d4527b857fa8666e235430124602d85ccc5d4c298d8462fbfe"} Feb 27 17:18:31 crc kubenswrapper[4708]: I0227 17:18:31.056719 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Feb 27 17:18:31 crc kubenswrapper[4708]: I0227 17:18:31.097385 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.097359108 podStartE2EDuration="2.097359108s" podCreationTimestamp="2026-02-27 17:18:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:18:31.082491871 +0000 UTC m=+1509.598289488" watchObservedRunningTime="2026-02-27 17:18:31.097359108 +0000 UTC m=+1509.613156735" Feb 27 17:18:32 crc kubenswrapper[4708]: I0227 17:18:32.074315 4708 generic.go:334] "Generic (PLEG): container finished" podID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerID="69410d999819e5a1f1752ac9a4a43cf0f85d5fb8cc17128335ebe0b607ca5ece" exitCode=0 Feb 27 17:18:32 crc kubenswrapper[4708]: I0227 17:18:32.074369 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d911ddc-1b45-4338-a04b-bd45fa68c6b3","Type":"ContainerDied","Data":"69410d999819e5a1f1752ac9a4a43cf0f85d5fb8cc17128335ebe0b607ca5ece"} Feb 27 17:18:39 crc kubenswrapper[4708]: I0227 17:18:39.594209 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Feb 27 17:18:40 crc kubenswrapper[4708]: I0227 17:18:40.922288 4708 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-cell0-cell-mapping-g6m5b"] Feb 27 17:18:40 crc kubenswrapper[4708]: I0227 17:18:40.925236 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:40 crc kubenswrapper[4708]: I0227 17:18:40.927685 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Feb 27 17:18:40 crc kubenswrapper[4708]: I0227 17:18:40.932209 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Feb 27 17:18:40 crc kubenswrapper[4708]: I0227 17:18:40.936883 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-g6m5b"] Feb 27 17:18:40 crc kubenswrapper[4708]: I0227 17:18:40.969078 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xckvw\" (UniqueName: \"kubernetes.io/projected/c184c80c-f3fb-47ff-a8b7-46632aa678f4-kube-api-access-xckvw\") pod \"nova-cell0-cell-mapping-g6m5b\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:40 crc kubenswrapper[4708]: I0227 17:18:40.969257 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-g6m5b\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:40 crc kubenswrapper[4708]: I0227 17:18:40.969681 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-config-data\") pod \"nova-cell0-cell-mapping-g6m5b\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:40 crc kubenswrapper[4708]: I0227 17:18:40.969802 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-scripts\") pod \"nova-cell0-cell-mapping-g6m5b\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.071886 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-config-data\") pod \"nova-cell0-cell-mapping-g6m5b\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.071948 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-scripts\") pod \"nova-cell0-cell-mapping-g6m5b\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.071997 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xckvw\" (UniqueName: \"kubernetes.io/projected/c184c80c-f3fb-47ff-a8b7-46632aa678f4-kube-api-access-xckvw\") pod \"nova-cell0-cell-mapping-g6m5b\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:41 
crc kubenswrapper[4708]: I0227 17:18:41.072043 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-g6m5b\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.080736 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-config-data\") pod \"nova-cell0-cell-mapping-g6m5b\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.083411 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-scripts\") pod \"nova-cell0-cell-mapping-g6m5b\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.089775 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-g6m5b\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.094539 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xckvw\" (UniqueName: \"kubernetes.io/projected/c184c80c-f3fb-47ff-a8b7-46632aa678f4-kube-api-access-xckvw\") pod \"nova-cell0-cell-mapping-g6m5b\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.160703 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.162622 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.169388 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.181201 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.224081 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.231576 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.235506 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.239506 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.241305 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.257508 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.263705 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.283678 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " pod="openstack/nova-api-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.283749 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-config-data\") pod \"nova-api-0\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " pod="openstack/nova-api-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.283775 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx5j6\" (UniqueName: \"kubernetes.io/projected/3578e95b-5d98-4904-80fe-4991f9079b45-kube-api-access-nx5j6\") pod \"nova-scheduler-0\" (UID: \"3578e95b-5d98-4904-80fe-4991f9079b45\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.283835 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3578e95b-5d98-4904-80fe-4991f9079b45-config-data\") pod \"nova-scheduler-0\" (UID: \"3578e95b-5d98-4904-80fe-4991f9079b45\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.283893 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-logs\") pod \"nova-api-0\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " pod="openstack/nova-api-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.283969 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3578e95b-5d98-4904-80fe-4991f9079b45-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3578e95b-5d98-4904-80fe-4991f9079b45\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.284028 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-277cj\" (UniqueName: \"kubernetes.io/projected/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-kube-api-access-277cj\") pod \"nova-api-0\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " pod="openstack/nova-api-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.284766 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.423236 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3578e95b-5d98-4904-80fe-4991f9079b45-config-data\") pod \"nova-scheduler-0\" (UID: \"3578e95b-5d98-4904-80fe-4991f9079b45\") " 
pod="openstack/nova-scheduler-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.423609 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-logs\") pod \"nova-api-0\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " pod="openstack/nova-api-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.423672 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daddb606-0982-4484-b92c-b3209b382878-config-data\") pod \"nova-metadata-0\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " pod="openstack/nova-metadata-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.423708 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/daddb606-0982-4484-b92c-b3209b382878-logs\") pod \"nova-metadata-0\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " pod="openstack/nova-metadata-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.423727 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3578e95b-5d98-4904-80fe-4991f9079b45-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3578e95b-5d98-4904-80fe-4991f9079b45\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.423762 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daddb606-0982-4484-b92c-b3209b382878-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " pod="openstack/nova-metadata-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.423805 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-277cj\" (UniqueName: \"kubernetes.io/projected/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-kube-api-access-277cj\") pod \"nova-api-0\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " pod="openstack/nova-api-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.423830 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bx59x\" (UniqueName: \"kubernetes.io/projected/daddb606-0982-4484-b92c-b3209b382878-kube-api-access-bx59x\") pod \"nova-metadata-0\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " pod="openstack/nova-metadata-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.423908 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " pod="openstack/nova-api-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.423956 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-config-data\") pod \"nova-api-0\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " pod="openstack/nova-api-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.423977 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx5j6\" (UniqueName: 
\"kubernetes.io/projected/3578e95b-5d98-4904-80fe-4991f9079b45-kube-api-access-nx5j6\") pod \"nova-scheduler-0\" (UID: \"3578e95b-5d98-4904-80fe-4991f9079b45\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.427760 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " pod="openstack/nova-api-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.428020 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-logs\") pod \"nova-api-0\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " pod="openstack/nova-api-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.431303 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3578e95b-5d98-4904-80fe-4991f9079b45-config-data\") pod \"nova-scheduler-0\" (UID: \"3578e95b-5d98-4904-80fe-4991f9079b45\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.431700 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3578e95b-5d98-4904-80fe-4991f9079b45-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"3578e95b-5d98-4904-80fe-4991f9079b45\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.443003 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-config-data\") pod \"nova-api-0\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " pod="openstack/nova-api-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.448481 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nx5j6\" (UniqueName: \"kubernetes.io/projected/3578e95b-5d98-4904-80fe-4991f9079b45-kube-api-access-nx5j6\") pod \"nova-scheduler-0\" (UID: \"3578e95b-5d98-4904-80fe-4991f9079b45\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.453413 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-277cj\" (UniqueName: \"kubernetes.io/projected/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-kube-api-access-277cj\") pod \"nova-api-0\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " pod="openstack/nova-api-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.469720 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.513582 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.519458 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78cd565959-8qqxf"] Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.525514 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daddb606-0982-4484-b92c-b3209b382878-config-data\") pod \"nova-metadata-0\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " pod="openstack/nova-metadata-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.525563 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/daddb606-0982-4484-b92c-b3209b382878-logs\") pod \"nova-metadata-0\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " pod="openstack/nova-metadata-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.525594 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daddb606-0982-4484-b92c-b3209b382878-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " pod="openstack/nova-metadata-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.525629 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bx59x\" (UniqueName: \"kubernetes.io/projected/daddb606-0982-4484-b92c-b3209b382878-kube-api-access-bx59x\") pod \"nova-metadata-0\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " pod="openstack/nova-metadata-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.526259 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/daddb606-0982-4484-b92c-b3209b382878-logs\") pod \"nova-metadata-0\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " pod="openstack/nova-metadata-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.531244 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daddb606-0982-4484-b92c-b3209b382878-config-data\") pod \"nova-metadata-0\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " pod="openstack/nova-metadata-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.541684 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daddb606-0982-4484-b92c-b3209b382878-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " pod="openstack/nova-metadata-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.542096 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.543407 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-8qqxf"] Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.543484 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.543980 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.556497 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.557723 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.564082 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bx59x\" (UniqueName: \"kubernetes.io/projected/daddb606-0982-4484-b92c-b3209b382878-kube-api-access-bx59x\") pod \"nova-metadata-0\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " pod="openstack/nova-metadata-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.580283 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.614879 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.734437 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.734517 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgngj\" (UniqueName: \"kubernetes.io/projected/e57f23c6-0486-40ad-907d-7776d4d30404-kube-api-access-cgngj\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.734539 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.734572 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzr24\" (UniqueName: \"kubernetes.io/projected/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-kube-api-access-wzr24\") pod \"nova-cell1-novncproxy-0\" (UID: \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.734606 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.734624 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-config\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " 
pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.734699 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.734727 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-dns-svc\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.734765 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.837179 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.837249 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-dns-svc\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.837290 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.837310 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.837358 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgngj\" (UniqueName: \"kubernetes.io/projected/e57f23c6-0486-40ad-907d-7776d4d30404-kube-api-access-cgngj\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.837374 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " 
pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.837409 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzr24\" (UniqueName: \"kubernetes.io/projected/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-kube-api-access-wzr24\") pod \"nova-cell1-novncproxy-0\" (UID: \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.837442 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.837463 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-config\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.838325 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-config\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.838906 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-ovsdbserver-nb\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.841760 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-ovsdbserver-sb\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.842326 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-dns-svc\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.845070 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-dns-swift-storage-0\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.847796 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.851630 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.866902 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgngj\" (UniqueName: \"kubernetes.io/projected/e57f23c6-0486-40ad-907d-7776d4d30404-kube-api-access-cgngj\") pod \"dnsmasq-dns-78cd565959-8qqxf\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.874803 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzr24\" (UniqueName: \"kubernetes.io/projected/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-kube-api-access-wzr24\") pod \"nova-cell1-novncproxy-0\" (UID: \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.886019 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-g6m5b"] Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.894432 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:18:41 crc kubenswrapper[4708]: I0227 17:18:41.906577 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.104980 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.208548 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab","Type":"ContainerStarted","Data":"7adf9eb8783ae6254ad23bf0158cec99729ca6f4718223973b9bb88cb77e8ff1"} Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.210169 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g6m5b" event={"ID":"c184c80c-f3fb-47ff-a8b7-46632aa678f4","Type":"ContainerStarted","Data":"cedd138eea0495c00a829c3602c72daf0fb3d86190b111cc6e3a5564698b51b7"} Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.218295 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zslk8"] Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.219635 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.221746 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.222017 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.244407 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zslk8"] Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.355390 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.360168 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-scripts\") pod \"nova-cell1-conductor-db-sync-zslk8\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.360223 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zslk8\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.360564 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mggxg\" (UniqueName: \"kubernetes.io/projected/a78712b6-2f4f-4d79-a561-f30af5ee5733-kube-api-access-mggxg\") pod \"nova-cell1-conductor-db-sync-zslk8\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.360668 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-config-data\") pod \"nova-cell1-conductor-db-sync-zslk8\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.462436 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mggxg\" (UniqueName: \"kubernetes.io/projected/a78712b6-2f4f-4d79-a561-f30af5ee5733-kube-api-access-mggxg\") pod \"nova-cell1-conductor-db-sync-zslk8\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.462497 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-config-data\") pod \"nova-cell1-conductor-db-sync-zslk8\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.462591 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-scripts\") pod \"nova-cell1-conductor-db-sync-zslk8\" (UID: 
\"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.462623 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zslk8\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.471742 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-zslk8\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.477415 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-config-data\") pod \"nova-cell1-conductor-db-sync-zslk8\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.478527 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-scripts\") pod \"nova-cell1-conductor-db-sync-zslk8\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.495759 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mggxg\" (UniqueName: \"kubernetes.io/projected/a78712b6-2f4f-4d79-a561-f30af5ee5733-kube-api-access-mggxg\") pod \"nova-cell1-conductor-db-sync-zslk8\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.547031 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.576475 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:18:42 crc kubenswrapper[4708]: W0227 17:18:42.599649 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3578e95b_5d98_4904_80fe_4991f9079b45.slice/crio-c8a10062d72c38a90a7b94f9116deb8b79aa8e4b92991b4a221188c13870990c WatchSource:0}: Error finding container c8a10062d72c38a90a7b94f9116deb8b79aa8e4b92991b4a221188c13870990c: Status 404 returned error can't find the container with id c8a10062d72c38a90a7b94f9116deb8b79aa8e4b92991b4a221188c13870990c Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.608464 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-8qqxf"] Feb 27 17:18:42 crc kubenswrapper[4708]: I0227 17:18:42.765430 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:18:42 crc kubenswrapper[4708]: W0227 17:18:42.780313 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode79c3134_0ea1_4730_9fc0_74f3b91d5fae.slice/crio-2e5b42b4eecf8a73ba377d71028caf57766d47481efe555bdc7a08f996b66244 WatchSource:0}: Error finding container 2e5b42b4eecf8a73ba377d71028caf57766d47481efe555bdc7a08f996b66244: Status 404 returned error can't find the container with id 2e5b42b4eecf8a73ba377d71028caf57766d47481efe555bdc7a08f996b66244 Feb 27 17:18:43 crc kubenswrapper[4708]: W0227 17:18:43.157913 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda78712b6_2f4f_4d79_a561_f30af5ee5733.slice/crio-2e2f2168f6f30bf31747ec8916d1b3f3d85d6b5e018073ed52f34973734ac7dd WatchSource:0}: Error finding container 2e2f2168f6f30bf31747ec8916d1b3f3d85d6b5e018073ed52f34973734ac7dd: Status 404 returned error can't find the container with id 2e2f2168f6f30bf31747ec8916d1b3f3d85d6b5e018073ed52f34973734ac7dd Feb 27 17:18:43 crc kubenswrapper[4708]: I0227 17:18:43.189809 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zslk8"] Feb 27 17:18:43 crc kubenswrapper[4708]: I0227 17:18:43.253121 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e79c3134-0ea1-4730-9fc0-74f3b91d5fae","Type":"ContainerStarted","Data":"2e5b42b4eecf8a73ba377d71028caf57766d47481efe555bdc7a08f996b66244"} Feb 27 17:18:43 crc kubenswrapper[4708]: I0227 17:18:43.258719 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"daddb606-0982-4484-b92c-b3209b382878","Type":"ContainerStarted","Data":"2b72fe365c4657eef35e94552df0ad6e06e44d7297012908f5fdbac2fce4e2f6"} Feb 27 17:18:43 crc kubenswrapper[4708]: I0227 17:18:43.260286 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3578e95b-5d98-4904-80fe-4991f9079b45","Type":"ContainerStarted","Data":"c8a10062d72c38a90a7b94f9116deb8b79aa8e4b92991b4a221188c13870990c"} Feb 27 17:18:43 crc kubenswrapper[4708]: I0227 17:18:43.268245 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g6m5b" 
event={"ID":"c184c80c-f3fb-47ff-a8b7-46632aa678f4","Type":"ContainerStarted","Data":"9869dc5390602739a9c7dad244f702f2d0930a201d13824b1426224bbea26287"} Feb 27 17:18:43 crc kubenswrapper[4708]: I0227 17:18:43.277650 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zslk8" event={"ID":"a78712b6-2f4f-4d79-a561-f30af5ee5733","Type":"ContainerStarted","Data":"2e2f2168f6f30bf31747ec8916d1b3f3d85d6b5e018073ed52f34973734ac7dd"} Feb 27 17:18:43 crc kubenswrapper[4708]: I0227 17:18:43.311589 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-g6m5b" podStartSLOduration=3.311574216 podStartE2EDuration="3.311574216s" podCreationTimestamp="2026-02-27 17:18:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:18:43.309277551 +0000 UTC m=+1521.825075138" watchObservedRunningTime="2026-02-27 17:18:43.311574216 +0000 UTC m=+1521.827371803" Feb 27 17:18:43 crc kubenswrapper[4708]: I0227 17:18:43.312499 4708 generic.go:334] "Generic (PLEG): container finished" podID="e57f23c6-0486-40ad-907d-7776d4d30404" containerID="406947576be574d7b807980769b10a6645cd4da3a207d710c8d545e2698b9d28" exitCode=0 Feb 27 17:18:43 crc kubenswrapper[4708]: I0227 17:18:43.312555 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-8qqxf" event={"ID":"e57f23c6-0486-40ad-907d-7776d4d30404","Type":"ContainerDied","Data":"406947576be574d7b807980769b10a6645cd4da3a207d710c8d545e2698b9d28"} Feb 27 17:18:43 crc kubenswrapper[4708]: I0227 17:18:43.312581 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-8qqxf" event={"ID":"e57f23c6-0486-40ad-907d-7776d4d30404","Type":"ContainerStarted","Data":"a58d47929b5784688a00fdc5276901520e843d3bb406bd381abf3d1caa0055de"} Feb 27 17:18:44 crc kubenswrapper[4708]: I0227 17:18:44.330462 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zslk8" event={"ID":"a78712b6-2f4f-4d79-a561-f30af5ee5733","Type":"ContainerStarted","Data":"ff80e8a3a3c8d6e369f2546d8302fd6601c6070bbf644506bd5132b037ec16fe"} Feb 27 17:18:44 crc kubenswrapper[4708]: I0227 17:18:44.337452 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-8qqxf" event={"ID":"e57f23c6-0486-40ad-907d-7776d4d30404","Type":"ContainerStarted","Data":"a032eb5594b425d42be3bffb13103db3c9dd59a781ad87e9e0150e3c52481137"} Feb 27 17:18:44 crc kubenswrapper[4708]: I0227 17:18:44.372933 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-zslk8" podStartSLOduration=2.372910994 podStartE2EDuration="2.372910994s" podCreationTimestamp="2026-02-27 17:18:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:18:44.355354581 +0000 UTC m=+1522.871152198" watchObservedRunningTime="2026-02-27 17:18:44.372910994 +0000 UTC m=+1522.888708601" Feb 27 17:18:44 crc kubenswrapper[4708]: I0227 17:18:44.394628 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-78cd565959-8qqxf" podStartSLOduration=3.394603184 podStartE2EDuration="3.394603184s" podCreationTimestamp="2026-02-27 17:18:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-02-27 17:18:44.386208838 +0000 UTC m=+1522.902006445" watchObservedRunningTime="2026-02-27 17:18:44.394603184 +0000 UTC m=+1522.910400791" Feb 27 17:18:44 crc kubenswrapper[4708]: I0227 17:18:44.894618 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:18:44 crc kubenswrapper[4708]: I0227 17:18:44.911536 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:18:45 crc kubenswrapper[4708]: I0227 17:18:45.354466 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:46 crc kubenswrapper[4708]: I0227 17:18:46.366448 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab","Type":"ContainerStarted","Data":"44a9e2e848a4713daf5179ee089abb0edcbe3147214bce618b5fc5d4c52ec523"} Feb 27 17:18:46 crc kubenswrapper[4708]: I0227 17:18:46.369177 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e79c3134-0ea1-4730-9fc0-74f3b91d5fae","Type":"ContainerStarted","Data":"b53ac37eb796c474a372bb2fb0eb15a25c785f9bf4f55d3d8bee5ec2e99f6e62"} Feb 27 17:18:46 crc kubenswrapper[4708]: I0227 17:18:46.369456 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="e79c3134-0ea1-4730-9fc0-74f3b91d5fae" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://b53ac37eb796c474a372bb2fb0eb15a25c785f9bf4f55d3d8bee5ec2e99f6e62" gracePeriod=30 Feb 27 17:18:46 crc kubenswrapper[4708]: I0227 17:18:46.376196 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"daddb606-0982-4484-b92c-b3209b382878","Type":"ContainerStarted","Data":"c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7"} Feb 27 17:18:46 crc kubenswrapper[4708]: I0227 17:18:46.386388 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3578e95b-5d98-4904-80fe-4991f9079b45","Type":"ContainerStarted","Data":"d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2"} Feb 27 17:18:46 crc kubenswrapper[4708]: I0227 17:18:46.390210 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.281030317 podStartE2EDuration="5.390196252s" podCreationTimestamp="2026-02-27 17:18:41 +0000 UTC" firstStartedPulling="2026-02-27 17:18:42.782456409 +0000 UTC m=+1521.298253996" lastFinishedPulling="2026-02-27 17:18:45.891622344 +0000 UTC m=+1524.407419931" observedRunningTime="2026-02-27 17:18:46.386750295 +0000 UTC m=+1524.902547882" watchObservedRunningTime="2026-02-27 17:18:46.390196252 +0000 UTC m=+1524.905993839" Feb 27 17:18:46 crc kubenswrapper[4708]: I0227 17:18:46.424877 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.142800584 podStartE2EDuration="5.424859206s" podCreationTimestamp="2026-02-27 17:18:41 +0000 UTC" firstStartedPulling="2026-02-27 17:18:42.609679035 +0000 UTC m=+1521.125476622" lastFinishedPulling="2026-02-27 17:18:45.891737657 +0000 UTC m=+1524.407535244" observedRunningTime="2026-02-27 17:18:46.424151006 +0000 UTC m=+1524.939948603" watchObservedRunningTime="2026-02-27 17:18:46.424859206 +0000 UTC m=+1524.940656793" Feb 27 17:18:46 crc kubenswrapper[4708]: I0227 17:18:46.580588 4708 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 17:18:46 crc kubenswrapper[4708]: I0227 17:18:46.896002 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:18:47 crc kubenswrapper[4708]: I0227 17:18:47.395655 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab","Type":"ContainerStarted","Data":"8b8ddac9c35a192fa3d28063f08a0c7b300e04795e1b185c1c354c4aaf512a9b"} Feb 27 17:18:47 crc kubenswrapper[4708]: I0227 17:18:47.398503 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"daddb606-0982-4484-b92c-b3209b382878","Type":"ContainerStarted","Data":"2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6"} Feb 27 17:18:47 crc kubenswrapper[4708]: I0227 17:18:47.398827 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="daddb606-0982-4484-b92c-b3209b382878" containerName="nova-metadata-metadata" containerID="cri-o://2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6" gracePeriod=30 Feb 27 17:18:47 crc kubenswrapper[4708]: I0227 17:18:47.398953 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="daddb606-0982-4484-b92c-b3209b382878" containerName="nova-metadata-log" containerID="cri-o://c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7" gracePeriod=30 Feb 27 17:18:47 crc kubenswrapper[4708]: I0227 17:18:47.420452 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.583048462 podStartE2EDuration="6.420438007s" podCreationTimestamp="2026-02-27 17:18:41 +0000 UTC" firstStartedPulling="2026-02-27 17:18:42.135727799 +0000 UTC m=+1520.651525386" lastFinishedPulling="2026-02-27 17:18:45.973117344 +0000 UTC m=+1524.488914931" observedRunningTime="2026-02-27 17:18:47.412071072 +0000 UTC m=+1525.927868659" watchObservedRunningTime="2026-02-27 17:18:47.420438007 +0000 UTC m=+1525.936235594" Feb 27 17:18:47 crc kubenswrapper[4708]: I0227 17:18:47.440905 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.806118 podStartE2EDuration="6.440891462s" podCreationTimestamp="2026-02-27 17:18:41 +0000 UTC" firstStartedPulling="2026-02-27 17:18:42.355541665 +0000 UTC m=+1520.871339242" lastFinishedPulling="2026-02-27 17:18:45.990315107 +0000 UTC m=+1524.506112704" observedRunningTime="2026-02-27 17:18:47.433269908 +0000 UTC m=+1525.949067485" watchObservedRunningTime="2026-02-27 17:18:47.440891462 +0000 UTC m=+1525.956689049" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.082815 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.136735 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/daddb606-0982-4484-b92c-b3209b382878-logs\") pod \"daddb606-0982-4484-b92c-b3209b382878\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.136945 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daddb606-0982-4484-b92c-b3209b382878-config-data\") pod \"daddb606-0982-4484-b92c-b3209b382878\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.137091 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/daddb606-0982-4484-b92c-b3209b382878-logs" (OuterVolumeSpecName: "logs") pod "daddb606-0982-4484-b92c-b3209b382878" (UID: "daddb606-0982-4484-b92c-b3209b382878"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.137107 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daddb606-0982-4484-b92c-b3209b382878-combined-ca-bundle\") pod \"daddb606-0982-4484-b92c-b3209b382878\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.137220 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bx59x\" (UniqueName: \"kubernetes.io/projected/daddb606-0982-4484-b92c-b3209b382878-kube-api-access-bx59x\") pod \"daddb606-0982-4484-b92c-b3209b382878\" (UID: \"daddb606-0982-4484-b92c-b3209b382878\") " Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.138030 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/daddb606-0982-4484-b92c-b3209b382878-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.157429 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/daddb606-0982-4484-b92c-b3209b382878-kube-api-access-bx59x" (OuterVolumeSpecName: "kube-api-access-bx59x") pod "daddb606-0982-4484-b92c-b3209b382878" (UID: "daddb606-0982-4484-b92c-b3209b382878"). InnerVolumeSpecName "kube-api-access-bx59x". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.172242 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daddb606-0982-4484-b92c-b3209b382878-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "daddb606-0982-4484-b92c-b3209b382878" (UID: "daddb606-0982-4484-b92c-b3209b382878"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.189472 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/daddb606-0982-4484-b92c-b3209b382878-config-data" (OuterVolumeSpecName: "config-data") pod "daddb606-0982-4484-b92c-b3209b382878" (UID: "daddb606-0982-4484-b92c-b3209b382878"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.240384 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/daddb606-0982-4484-b92c-b3209b382878-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.241755 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/daddb606-0982-4484-b92c-b3209b382878-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.242004 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bx59x\" (UniqueName: \"kubernetes.io/projected/daddb606-0982-4484-b92c-b3209b382878-kube-api-access-bx59x\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.418751 4708 generic.go:334] "Generic (PLEG): container finished" podID="daddb606-0982-4484-b92c-b3209b382878" containerID="2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6" exitCode=0 Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.418826 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.418826 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"daddb606-0982-4484-b92c-b3209b382878","Type":"ContainerDied","Data":"2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6"} Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.420094 4708 scope.go:117] "RemoveContainer" containerID="2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.420923 4708 generic.go:334] "Generic (PLEG): container finished" podID="daddb606-0982-4484-b92c-b3209b382878" containerID="c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7" exitCode=143 Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.420964 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"daddb606-0982-4484-b92c-b3209b382878","Type":"ContainerDied","Data":"c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7"} Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.421273 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"daddb606-0982-4484-b92c-b3209b382878","Type":"ContainerDied","Data":"2b72fe365c4657eef35e94552df0ad6e06e44d7297012908f5fdbac2fce4e2f6"} Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.469682 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.486267 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.497126 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:18:48 crc kubenswrapper[4708]: E0227 17:18:48.497593 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daddb606-0982-4484-b92c-b3209b382878" containerName="nova-metadata-log" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.497608 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="daddb606-0982-4484-b92c-b3209b382878" containerName="nova-metadata-log" Feb 27 17:18:48 crc kubenswrapper[4708]: E0227 17:18:48.497637 4708 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="daddb606-0982-4484-b92c-b3209b382878" containerName="nova-metadata-metadata" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.497645 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="daddb606-0982-4484-b92c-b3209b382878" containerName="nova-metadata-metadata" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.497822 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="daddb606-0982-4484-b92c-b3209b382878" containerName="nova-metadata-log" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.497840 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="daddb606-0982-4484-b92c-b3209b382878" containerName="nova-metadata-metadata" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.498922 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.505261 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.505691 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.540644 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.548226 4708 scope.go:117] "RemoveContainer" containerID="c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.554475 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.554653 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.554988 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6bc3b12-24d5-4a76-b182-8701a65e0021-logs\") pod \"nova-metadata-0\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.555238 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-config-data\") pod \"nova-metadata-0\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.555307 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mln2\" (UniqueName: \"kubernetes.io/projected/b6bc3b12-24d5-4a76-b182-8701a65e0021-kube-api-access-6mln2\") pod \"nova-metadata-0\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc 
kubenswrapper[4708]: I0227 17:18:48.584109 4708 scope.go:117] "RemoveContainer" containerID="2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6" Feb 27 17:18:48 crc kubenswrapper[4708]: E0227 17:18:48.584706 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6\": container with ID starting with 2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6 not found: ID does not exist" containerID="2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.584749 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6"} err="failed to get container status \"2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6\": rpc error: code = NotFound desc = could not find container \"2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6\": container with ID starting with 2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6 not found: ID does not exist" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.584779 4708 scope.go:117] "RemoveContainer" containerID="c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7" Feb 27 17:18:48 crc kubenswrapper[4708]: E0227 17:18:48.585220 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7\": container with ID starting with c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7 not found: ID does not exist" containerID="c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.585249 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7"} err="failed to get container status \"c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7\": rpc error: code = NotFound desc = could not find container \"c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7\": container with ID starting with c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7 not found: ID does not exist" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.585264 4708 scope.go:117] "RemoveContainer" containerID="2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.587091 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6"} err="failed to get container status \"2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6\": rpc error: code = NotFound desc = could not find container \"2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6\": container with ID starting with 2671c6d38e9178588e0dbd6a42808fa791aa038f358b1372dc2996522d492de6 not found: ID does not exist" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.587135 4708 scope.go:117] "RemoveContainer" containerID="c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.587560 4708 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7"} err="failed to get container status \"c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7\": rpc error: code = NotFound desc = could not find container \"c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7\": container with ID starting with c63f1f220c994096789f3f056566b6e906a1901606a7ef951f8c7f101e00fbc7 not found: ID does not exist" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.624502 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 503" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.657511 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.657593 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.657693 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6bc3b12-24d5-4a76-b182-8701a65e0021-logs\") pod \"nova-metadata-0\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.657763 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-config-data\") pod \"nova-metadata-0\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.657794 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6mln2\" (UniqueName: \"kubernetes.io/projected/b6bc3b12-24d5-4a76-b182-8701a65e0021-kube-api-access-6mln2\") pod \"nova-metadata-0\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.658333 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6bc3b12-24d5-4a76-b182-8701a65e0021-logs\") pod \"nova-metadata-0\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.662666 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-config-data\") pod \"nova-metadata-0\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.662792 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: 
\"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.663029 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.696636 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mln2\" (UniqueName: \"kubernetes.io/projected/b6bc3b12-24d5-4a76-b182-8701a65e0021-kube-api-access-6mln2\") pod \"nova-metadata-0\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") " pod="openstack/nova-metadata-0" Feb 27 17:18:48 crc kubenswrapper[4708]: I0227 17:18:48.839052 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:18:49 crc kubenswrapper[4708]: I0227 17:18:49.379116 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:18:49 crc kubenswrapper[4708]: W0227 17:18:49.382524 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb6bc3b12_24d5_4a76_b182_8701a65e0021.slice/crio-f806d8b25f308de2e2e7e82728cb9ffb2e6fd60f53c71fff5e368f2e91b15322 WatchSource:0}: Error finding container f806d8b25f308de2e2e7e82728cb9ffb2e6fd60f53c71fff5e368f2e91b15322: Status 404 returned error can't find the container with id f806d8b25f308de2e2e7e82728cb9ffb2e6fd60f53c71fff5e368f2e91b15322 Feb 27 17:18:49 crc kubenswrapper[4708]: I0227 17:18:49.439244 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6bc3b12-24d5-4a76-b182-8701a65e0021","Type":"ContainerStarted","Data":"f806d8b25f308de2e2e7e82728cb9ffb2e6fd60f53c71fff5e368f2e91b15322"} Feb 27 17:18:49 crc kubenswrapper[4708]: I0227 17:18:49.442771 4708 generic.go:334] "Generic (PLEG): container finished" podID="c184c80c-f3fb-47ff-a8b7-46632aa678f4" containerID="9869dc5390602739a9c7dad244f702f2d0930a201d13824b1426224bbea26287" exitCode=0 Feb 27 17:18:49 crc kubenswrapper[4708]: I0227 17:18:49.442813 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g6m5b" event={"ID":"c184c80c-f3fb-47ff-a8b7-46632aa678f4","Type":"ContainerDied","Data":"9869dc5390602739a9c7dad244f702f2d0930a201d13824b1426224bbea26287"} Feb 27 17:18:50 crc kubenswrapper[4708]: I0227 17:18:50.239622 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="daddb606-0982-4484-b92c-b3209b382878" path="/var/lib/kubelet/pods/daddb606-0982-4484-b92c-b3209b382878/volumes" Feb 27 17:18:50 crc kubenswrapper[4708]: I0227 17:18:50.460923 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6bc3b12-24d5-4a76-b182-8701a65e0021","Type":"ContainerStarted","Data":"4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4"} Feb 27 17:18:50 crc kubenswrapper[4708]: I0227 17:18:50.460980 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6bc3b12-24d5-4a76-b182-8701a65e0021","Type":"ContainerStarted","Data":"81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32"} Feb 27 17:18:50 crc kubenswrapper[4708]: I0227 17:18:50.506493 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/nova-metadata-0" podStartSLOduration=2.506465953 podStartE2EDuration="2.506465953s" podCreationTimestamp="2026-02-27 17:18:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:18:50.485321658 +0000 UTC m=+1529.001119285" watchObservedRunningTime="2026-02-27 17:18:50.506465953 +0000 UTC m=+1529.022263590" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.028145 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.113769 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-config-data\") pod \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.114141 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-combined-ca-bundle\") pod \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.114291 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xckvw\" (UniqueName: \"kubernetes.io/projected/c184c80c-f3fb-47ff-a8b7-46632aa678f4-kube-api-access-xckvw\") pod \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.114389 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-scripts\") pod \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\" (UID: \"c184c80c-f3fb-47ff-a8b7-46632aa678f4\") " Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.130606 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c184c80c-f3fb-47ff-a8b7-46632aa678f4-kube-api-access-xckvw" (OuterVolumeSpecName: "kube-api-access-xckvw") pod "c184c80c-f3fb-47ff-a8b7-46632aa678f4" (UID: "c184c80c-f3fb-47ff-a8b7-46632aa678f4"). InnerVolumeSpecName "kube-api-access-xckvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.131785 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-scripts" (OuterVolumeSpecName: "scripts") pod "c184c80c-f3fb-47ff-a8b7-46632aa678f4" (UID: "c184c80c-f3fb-47ff-a8b7-46632aa678f4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.167410 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c184c80c-f3fb-47ff-a8b7-46632aa678f4" (UID: "c184c80c-f3fb-47ff-a8b7-46632aa678f4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.170113 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-config-data" (OuterVolumeSpecName: "config-data") pod "c184c80c-f3fb-47ff-a8b7-46632aa678f4" (UID: "c184c80c-f3fb-47ff-a8b7-46632aa678f4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.218892 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.218936 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.218948 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xckvw\" (UniqueName: \"kubernetes.io/projected/c184c80c-f3fb-47ff-a8b7-46632aa678f4-kube-api-access-xckvw\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.218958 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c184c80c-f3fb-47ff-a8b7-46632aa678f4-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.478193 4708 generic.go:334] "Generic (PLEG): container finished" podID="a78712b6-2f4f-4d79-a561-f30af5ee5733" containerID="ff80e8a3a3c8d6e369f2546d8302fd6601c6070bbf644506bd5132b037ec16fe" exitCode=0 Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.478322 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zslk8" event={"ID":"a78712b6-2f4f-4d79-a561-f30af5ee5733","Type":"ContainerDied","Data":"ff80e8a3a3c8d6e369f2546d8302fd6601c6070bbf644506bd5132b037ec16fe"} Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.481485 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-g6m5b" event={"ID":"c184c80c-f3fb-47ff-a8b7-46632aa678f4","Type":"ContainerDied","Data":"cedd138eea0495c00a829c3602c72daf0fb3d86190b111cc6e3a5564698b51b7"} Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.481541 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cedd138eea0495c00a829c3602c72daf0fb3d86190b111cc6e3a5564698b51b7" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.481543 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-g6m5b" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.515575 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.515655 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.581128 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.632312 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.697865 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.728068 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.754786 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:18:51 crc kubenswrapper[4708]: I0227 17:18:51.909083 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.004135 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-mqlv7"] Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.004435 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" podUID="562eb33e-5d66-4492-ba9b-dda2b6666471" containerName="dnsmasq-dns" containerID="cri-o://e768f6c202a42cf223a6c5ebae7a5124171aa2f15f8fc231fc07edab9677ad47" gracePeriod=10 Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.492912 4708 generic.go:334] "Generic (PLEG): container finished" podID="562eb33e-5d66-4492-ba9b-dda2b6666471" containerID="e768f6c202a42cf223a6c5ebae7a5124171aa2f15f8fc231fc07edab9677ad47" exitCode=0 Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.493029 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" event={"ID":"562eb33e-5d66-4492-ba9b-dda2b6666471","Type":"ContainerDied","Data":"e768f6c202a42cf223a6c5ebae7a5124171aa2f15f8fc231fc07edab9677ad47"} Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.493103 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" event={"ID":"562eb33e-5d66-4492-ba9b-dda2b6666471","Type":"ContainerDied","Data":"f81035c3adb51c26073a5d5f1510c1d0eb02762a99e4fcf29ebc18e0ce491b26"} Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.493120 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f81035c3adb51c26073a5d5f1510c1d0eb02762a99e4fcf29ebc18e0ce491b26" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.493156 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b6bc3b12-24d5-4a76-b182-8701a65e0021" containerName="nova-metadata-log" containerID="cri-o://81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32" gracePeriod=30 Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.493245 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" 
podUID="b6bc3b12-24d5-4a76-b182-8701a65e0021" containerName="nova-metadata-metadata" containerID="cri-o://4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4" gracePeriod=30 Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.493359 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" containerName="nova-api-log" containerID="cri-o://44a9e2e848a4713daf5179ee089abb0edcbe3147214bce618b5fc5d4c52ec523" gracePeriod=30 Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.493558 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" containerName="nova-api-api" containerID="cri-o://8b8ddac9c35a192fa3d28063f08a0c7b300e04795e1b185c1c354c4aaf512a9b" gracePeriod=30 Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.505513 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.217:8774/\": EOF" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.505689 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.217:8774/\": EOF" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.554232 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.584282 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-mqlv7" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.653982 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-ovsdbserver-nb\") pod \"562eb33e-5d66-4492-ba9b-dda2b6666471\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.654029 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-dns-svc\") pod \"562eb33e-5d66-4492-ba9b-dda2b6666471\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.654062 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-ovsdbserver-sb\") pod \"562eb33e-5d66-4492-ba9b-dda2b6666471\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.654146 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzbbc\" (UniqueName: \"kubernetes.io/projected/562eb33e-5d66-4492-ba9b-dda2b6666471-kube-api-access-wzbbc\") pod \"562eb33e-5d66-4492-ba9b-dda2b6666471\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.654258 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-config\") pod \"562eb33e-5d66-4492-ba9b-dda2b6666471\" (UID: 
\"562eb33e-5d66-4492-ba9b-dda2b6666471\") " Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.654323 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-dns-swift-storage-0\") pod \"562eb33e-5d66-4492-ba9b-dda2b6666471\" (UID: \"562eb33e-5d66-4492-ba9b-dda2b6666471\") " Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.676088 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/562eb33e-5d66-4492-ba9b-dda2b6666471-kube-api-access-wzbbc" (OuterVolumeSpecName: "kube-api-access-wzbbc") pod "562eb33e-5d66-4492-ba9b-dda2b6666471" (UID: "562eb33e-5d66-4492-ba9b-dda2b6666471"). InnerVolumeSpecName "kube-api-access-wzbbc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.736562 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "562eb33e-5d66-4492-ba9b-dda2b6666471" (UID: "562eb33e-5d66-4492-ba9b-dda2b6666471"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.751027 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "562eb33e-5d66-4492-ba9b-dda2b6666471" (UID: "562eb33e-5d66-4492-ba9b-dda2b6666471"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.756583 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzbbc\" (UniqueName: \"kubernetes.io/projected/562eb33e-5d66-4492-ba9b-dda2b6666471-kube-api-access-wzbbc\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.756599 4708 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.756610 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.759006 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "562eb33e-5d66-4492-ba9b-dda2b6666471" (UID: "562eb33e-5d66-4492-ba9b-dda2b6666471"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.797747 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-config" (OuterVolumeSpecName: "config") pod "562eb33e-5d66-4492-ba9b-dda2b6666471" (UID: "562eb33e-5d66-4492-ba9b-dda2b6666471"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.800012 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "562eb33e-5d66-4492-ba9b-dda2b6666471" (UID: "562eb33e-5d66-4492-ba9b-dda2b6666471"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.858576 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.858606 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.858615 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/562eb33e-5d66-4492-ba9b-dda2b6666471-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.878737 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zslk8" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.960481 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mggxg\" (UniqueName: \"kubernetes.io/projected/a78712b6-2f4f-4d79-a561-f30af5ee5733-kube-api-access-mggxg\") pod \"a78712b6-2f4f-4d79-a561-f30af5ee5733\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.960693 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-config-data\") pod \"a78712b6-2f4f-4d79-a561-f30af5ee5733\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.960875 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-scripts\") pod \"a78712b6-2f4f-4d79-a561-f30af5ee5733\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.960945 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-combined-ca-bundle\") pod \"a78712b6-2f4f-4d79-a561-f30af5ee5733\" (UID: \"a78712b6-2f4f-4d79-a561-f30af5ee5733\") " Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.982315 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a78712b6-2f4f-4d79-a561-f30af5ee5733-kube-api-access-mggxg" (OuterVolumeSpecName: "kube-api-access-mggxg") pod "a78712b6-2f4f-4d79-a561-f30af5ee5733" (UID: "a78712b6-2f4f-4d79-a561-f30af5ee5733"). InnerVolumeSpecName "kube-api-access-mggxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.982350 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-scripts" (OuterVolumeSpecName: "scripts") pod "a78712b6-2f4f-4d79-a561-f30af5ee5733" (UID: "a78712b6-2f4f-4d79-a561-f30af5ee5733"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:52 crc kubenswrapper[4708]: I0227 17:18:52.988495 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-config-data" (OuterVolumeSpecName: "config-data") pod "a78712b6-2f4f-4d79-a561-f30af5ee5733" (UID: "a78712b6-2f4f-4d79-a561-f30af5ee5733"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.000989 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a78712b6-2f4f-4d79-a561-f30af5ee5733" (UID: "a78712b6-2f4f-4d79-a561-f30af5ee5733"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.072779 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.072807 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.072817 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a78712b6-2f4f-4d79-a561-f30af5ee5733-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.072829 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mggxg\" (UniqueName: \"kubernetes.io/projected/a78712b6-2f4f-4d79-a561-f30af5ee5733-kube-api-access-mggxg\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.091278 4708 util.go:48] "No ready sandbox for pod can be found. 
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.180559 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-combined-ca-bundle\") pod \"b6bc3b12-24d5-4a76-b182-8701a65e0021\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") "
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.180650 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-config-data\") pod \"b6bc3b12-24d5-4a76-b182-8701a65e0021\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") "
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.180737 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mln2\" (UniqueName: \"kubernetes.io/projected/b6bc3b12-24d5-4a76-b182-8701a65e0021-kube-api-access-6mln2\") pod \"b6bc3b12-24d5-4a76-b182-8701a65e0021\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") "
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.180805 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6bc3b12-24d5-4a76-b182-8701a65e0021-logs\") pod \"b6bc3b12-24d5-4a76-b182-8701a65e0021\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") "
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.180861 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-nova-metadata-tls-certs\") pod \"b6bc3b12-24d5-4a76-b182-8701a65e0021\" (UID: \"b6bc3b12-24d5-4a76-b182-8701a65e0021\") "
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.181180 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b6bc3b12-24d5-4a76-b182-8701a65e0021-logs" (OuterVolumeSpecName: "logs") pod "b6bc3b12-24d5-4a76-b182-8701a65e0021" (UID: "b6bc3b12-24d5-4a76-b182-8701a65e0021"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.181277 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b6bc3b12-24d5-4a76-b182-8701a65e0021-logs\") on node \"crc\" DevicePath \"\""
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.199866 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6bc3b12-24d5-4a76-b182-8701a65e0021-kube-api-access-6mln2" (OuterVolumeSpecName: "kube-api-access-6mln2") pod "b6bc3b12-24d5-4a76-b182-8701a65e0021" (UID: "b6bc3b12-24d5-4a76-b182-8701a65e0021"). InnerVolumeSpecName "kube-api-access-6mln2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.224193 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b6bc3b12-24d5-4a76-b182-8701a65e0021" (UID: "b6bc3b12-24d5-4a76-b182-8701a65e0021"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.255259 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-config-data" (OuterVolumeSpecName: "config-data") pod "b6bc3b12-24d5-4a76-b182-8701a65e0021" (UID: "b6bc3b12-24d5-4a76-b182-8701a65e0021"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.261626 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b6bc3b12-24d5-4a76-b182-8701a65e0021" (UID: "b6bc3b12-24d5-4a76-b182-8701a65e0021"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.284826 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.284995 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.285010 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6mln2\" (UniqueName: \"kubernetes.io/projected/b6bc3b12-24d5-4a76-b182-8701a65e0021-kube-api-access-6mln2\") on node \"crc\" DevicePath \"\""
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.285019 4708 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b6bc3b12-24d5-4a76-b182-8701a65e0021-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.510002 4708 generic.go:334] "Generic (PLEG): container finished" podID="7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" containerID="44a9e2e848a4713daf5179ee089abb0edcbe3147214bce618b5fc5d4c52ec523" exitCode=143
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.510071 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab","Type":"ContainerDied","Data":"44a9e2e848a4713daf5179ee089abb0edcbe3147214bce618b5fc5d4c52ec523"}
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.512025 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-zslk8" event={"ID":"a78712b6-2f4f-4d79-a561-f30af5ee5733","Type":"ContainerDied","Data":"2e2f2168f6f30bf31747ec8916d1b3f3d85d6b5e018073ed52f34973734ac7dd"}
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.512058 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e2f2168f6f30bf31747ec8916d1b3f3d85d6b5e018073ed52f34973734ac7dd"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.512124 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-zslk8"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.520362 4708 generic.go:334] "Generic (PLEG): container finished" podID="b6bc3b12-24d5-4a76-b182-8701a65e0021" containerID="4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4" exitCode=0
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.520421 4708 generic.go:334] "Generic (PLEG): container finished" podID="b6bc3b12-24d5-4a76-b182-8701a65e0021" containerID="81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32" exitCode=143
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.520692 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="3578e95b-5d98-4904-80fe-4991f9079b45" containerName="nova-scheduler-scheduler" containerID="cri-o://d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2" gracePeriod=30
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.521063 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-67bdc55879-mqlv7"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.521087 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.521213 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6bc3b12-24d5-4a76-b182-8701a65e0021","Type":"ContainerDied","Data":"4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4"}
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.521283 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6bc3b12-24d5-4a76-b182-8701a65e0021","Type":"ContainerDied","Data":"81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32"}
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.521302 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b6bc3b12-24d5-4a76-b182-8701a65e0021","Type":"ContainerDied","Data":"f806d8b25f308de2e2e7e82728cb9ffb2e6fd60f53c71fff5e368f2e91b15322"}
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.521321 4708 scope.go:117] "RemoveContainer" containerID="4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.591504 4708 scope.go:117] "RemoveContainer" containerID="81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.609973 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 27 17:18:53 crc kubenswrapper[4708]: E0227 17:18:53.610391 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6bc3b12-24d5-4a76-b182-8701a65e0021" containerName="nova-metadata-log"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.610402 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6bc3b12-24d5-4a76-b182-8701a65e0021" containerName="nova-metadata-log"
Feb 27 17:18:53 crc kubenswrapper[4708]: E0227 17:18:53.610412 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="562eb33e-5d66-4492-ba9b-dda2b6666471" containerName="dnsmasq-dns"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.610417 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="562eb33e-5d66-4492-ba9b-dda2b6666471" containerName="dnsmasq-dns"
Feb 27 17:18:53 crc kubenswrapper[4708]: E0227 17:18:53.610446 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="562eb33e-5d66-4492-ba9b-dda2b6666471" containerName="init"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.610452 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="562eb33e-5d66-4492-ba9b-dda2b6666471" containerName="init"
Feb 27 17:18:53 crc kubenswrapper[4708]: E0227 17:18:53.610464 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a78712b6-2f4f-4d79-a561-f30af5ee5733" containerName="nova-cell1-conductor-db-sync"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.610470 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="a78712b6-2f4f-4d79-a561-f30af5ee5733" containerName="nova-cell1-conductor-db-sync"
Feb 27 17:18:53 crc kubenswrapper[4708]: E0227 17:18:53.610481 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b6bc3b12-24d5-4a76-b182-8701a65e0021" containerName="nova-metadata-metadata"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.610486 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b6bc3b12-24d5-4a76-b182-8701a65e0021" containerName="nova-metadata-metadata"
Feb 27 17:18:53 crc kubenswrapper[4708]: E0227 17:18:53.610496 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c184c80c-f3fb-47ff-a8b7-46632aa678f4" containerName="nova-manage"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.610501 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c184c80c-f3fb-47ff-a8b7-46632aa678f4" containerName="nova-manage"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.610678 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="a78712b6-2f4f-4d79-a561-f30af5ee5733" containerName="nova-cell1-conductor-db-sync"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.610692 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="562eb33e-5d66-4492-ba9b-dda2b6666471" containerName="dnsmasq-dns"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.610709 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6bc3b12-24d5-4a76-b182-8701a65e0021" containerName="nova-metadata-log"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.610718 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c184c80c-f3fb-47ff-a8b7-46632aa678f4" containerName="nova-manage"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.610727 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6bc3b12-24d5-4a76-b182-8701a65e0021" containerName="nova-metadata-metadata"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.611462 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.614468 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.622015 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-mqlv7"]
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.640455 4708 scope.go:117] "RemoveContainer" containerID="4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4"
Feb 27 17:18:53 crc kubenswrapper[4708]: E0227 17:18:53.642995 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4\": container with ID starting with 4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4 not found: ID does not exist" containerID="4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.643082 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4"} err="failed to get container status \"4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4\": rpc error: code = NotFound desc = could not find container \"4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4\": container with ID starting with 4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4 not found: ID does not exist"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.643116 4708 scope.go:117] "RemoveContainer" containerID="81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.644520 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-67bdc55879-mqlv7"]
Feb 27 17:18:53 crc kubenswrapper[4708]: E0227 17:18:53.645457 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32\": container with ID starting with 81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32 not found: ID does not exist" containerID="81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.645495 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32"} err="failed to get container status \"81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32\": rpc error: code = NotFound desc = could not find container \"81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32\": container with ID starting with 81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32 not found: ID does not exist"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.645521 4708 scope.go:117] "RemoveContainer" containerID="4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.645891 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4"} err="failed to get container status \"4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4\": rpc error: code = NotFound desc = could not find container \"4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4\": container with ID starting with 4e10f155a32aebd58b5514c5feebc05ef953eba900a424b6f6b09f041f2370d4 not found: ID does not exist"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.645913 4708 scope.go:117] "RemoveContainer" containerID="81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.646060 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32"} err="failed to get container status \"81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32\": rpc error: code = NotFound desc = could not find container \"81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32\": container with ID starting with 81b272d85d34433e7ffd89479b82f3b81b2a6963b22f8e3fea252674b6abff32 not found: ID does not exist"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.662912 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.671739 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.680979 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.690521 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.692346 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.695134 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.695237 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.695518 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a0dd32-9089-4ca1-8814-b78372b68724-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b4a0dd32-9089-4ca1-8814-b78372b68724\") " pod="openstack/nova-cell1-conductor-0"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.695561 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9fbn\" (UniqueName: \"kubernetes.io/projected/b4a0dd32-9089-4ca1-8814-b78372b68724-kube-api-access-x9fbn\") pod \"nova-cell1-conductor-0\" (UID: \"b4a0dd32-9089-4ca1-8814-b78372b68724\") " pod="openstack/nova-cell1-conductor-0"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.695592 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a0dd32-9089-4ca1-8814-b78372b68724-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b4a0dd32-9089-4ca1-8814-b78372b68724\") " pod="openstack/nova-cell1-conductor-0"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.714540 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.800676 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.800732 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9fbn\" (UniqueName: \"kubernetes.io/projected/b4a0dd32-9089-4ca1-8814-b78372b68724-kube-api-access-x9fbn\") pod \"nova-cell1-conductor-0\" (UID: \"b4a0dd32-9089-4ca1-8814-b78372b68724\") " pod="openstack/nova-cell1-conductor-0"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.800773 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a0dd32-9089-4ca1-8814-b78372b68724-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b4a0dd32-9089-4ca1-8814-b78372b68724\") " pod="openstack/nova-cell1-conductor-0"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.800820 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s7vm\" (UniqueName: \"kubernetes.io/projected/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-kube-api-access-2s7vm\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0"
Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.801074 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-logs\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0"
\"kubernetes.io/empty-dir/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-logs\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.801137 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-config-data\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.801161 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a0dd32-9089-4ca1-8814-b78372b68724-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b4a0dd32-9089-4ca1-8814-b78372b68724\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.801180 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.807590 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b4a0dd32-9089-4ca1-8814-b78372b68724-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"b4a0dd32-9089-4ca1-8814-b78372b68724\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.808924 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b4a0dd32-9089-4ca1-8814-b78372b68724-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"b4a0dd32-9089-4ca1-8814-b78372b68724\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.825340 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9fbn\" (UniqueName: \"kubernetes.io/projected/b4a0dd32-9089-4ca1-8814-b78372b68724-kube-api-access-x9fbn\") pod \"nova-cell1-conductor-0\" (UID: \"b4a0dd32-9089-4ca1-8814-b78372b68724\") " pod="openstack/nova-cell1-conductor-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.902831 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2s7vm\" (UniqueName: \"kubernetes.io/projected/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-kube-api-access-2s7vm\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.903258 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-logs\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.903587 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-logs\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.903647 
4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-config-data\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.903672 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.904073 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.906976 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.907066 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.907251 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-config-data\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.922256 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2s7vm\" (UniqueName: \"kubernetes.io/projected/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-kube-api-access-2s7vm\") pod \"nova-metadata-0\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " pod="openstack/nova-metadata-0" Feb 27 17:18:53 crc kubenswrapper[4708]: I0227 17:18:53.945523 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.014929 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.238637 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="562eb33e-5d66-4492-ba9b-dda2b6666471" path="/var/lib/kubelet/pods/562eb33e-5d66-4492-ba9b-dda2b6666471/volumes" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.239559 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6bc3b12-24d5-4a76-b182-8701a65e0021" path="/var/lib/kubelet/pods/b6bc3b12-24d5-4a76-b182-8701a65e0021/volumes" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.407067 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.536975 4708 generic.go:334] "Generic (PLEG): container finished" podID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerID="f1dbc32b3a2081d2ec4ab558d40443793d478d6e4e5458c524470310e4b81c00" exitCode=137 Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.537032 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d911ddc-1b45-4338-a04b-bd45fa68c6b3","Type":"ContainerDied","Data":"f1dbc32b3a2081d2ec4ab558d40443793d478d6e4e5458c524470310e4b81c00"} Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.537080 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9d911ddc-1b45-4338-a04b-bd45fa68c6b3","Type":"ContainerDied","Data":"a30287092c7afe0d218e77ceb467ba766ccd65f3a36c8673e3721d0159d328ab"} Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.537095 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a30287092c7afe0d218e77ceb467ba766ccd65f3a36c8673e3721d0159d328ab" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.540584 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b4a0dd32-9089-4ca1-8814-b78372b68724","Type":"ContainerStarted","Data":"83fd02ae0244b4b6a9ccd370f3953742414c6f2430f5f8a360e19c8b0f8f7103"} Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.564286 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.600991 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.619639 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-config-data\") pod \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.619720 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-log-httpd\") pod \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.619751 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-sg-core-conf-yaml\") pod \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.619948 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-scripts\") pod \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.620274 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jtzhj\" (UniqueName: \"kubernetes.io/projected/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-kube-api-access-jtzhj\") pod \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.620327 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-combined-ca-bundle\") pod \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.620361 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-run-httpd\") pod \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\" (UID: \"9d911ddc-1b45-4338-a04b-bd45fa68c6b3\") " Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.621758 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9d911ddc-1b45-4338-a04b-bd45fa68c6b3" (UID: "9d911ddc-1b45-4338-a04b-bd45fa68c6b3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.624186 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9d911ddc-1b45-4338-a04b-bd45fa68c6b3" (UID: "9d911ddc-1b45-4338-a04b-bd45fa68c6b3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.627807 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-scripts" (OuterVolumeSpecName: "scripts") pod "9d911ddc-1b45-4338-a04b-bd45fa68c6b3" (UID: "9d911ddc-1b45-4338-a04b-bd45fa68c6b3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.628703 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-kube-api-access-jtzhj" (OuterVolumeSpecName: "kube-api-access-jtzhj") pod "9d911ddc-1b45-4338-a04b-bd45fa68c6b3" (UID: "9d911ddc-1b45-4338-a04b-bd45fa68c6b3"). InnerVolumeSpecName "kube-api-access-jtzhj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.673449 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9d911ddc-1b45-4338-a04b-bd45fa68c6b3" (UID: "9d911ddc-1b45-4338-a04b-bd45fa68c6b3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.705262 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9d911ddc-1b45-4338-a04b-bd45fa68c6b3" (UID: "9d911ddc-1b45-4338-a04b-bd45fa68c6b3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.723593 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jtzhj\" (UniqueName: \"kubernetes.io/projected/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-kube-api-access-jtzhj\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.723627 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.723639 4708 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.723648 4708 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.723657 4708 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.723666 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.732144 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-config-data" (OuterVolumeSpecName: "config-data") pod "9d911ddc-1b45-4338-a04b-bd45fa68c6b3" (UID: "9d911ddc-1b45-4338-a04b-bd45fa68c6b3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:54 crc kubenswrapper[4708]: I0227 17:18:54.825588 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9d911ddc-1b45-4338-a04b-bd45fa68c6b3-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.550938 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"b4a0dd32-9089-4ca1-8814-b78372b68724","Type":"ContainerStarted","Data":"41a37c123316b92eee174c10fa2c27ddcfdaf1afbf22a3aebc42edbe0825c2d2"} Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.551283 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.553311 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.553380 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d","Type":"ContainerStarted","Data":"3d544791e73d4f674be2f6abcaceb7d5c72d1bd4235e1a57c21efe226bdc7caf"} Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.553427 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d","Type":"ContainerStarted","Data":"fd9865d463e61e8b5e6ed048386215f3808fd9f161ddcdfdb9020fedec0f5c97"} Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.553437 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d","Type":"ContainerStarted","Data":"7fd595bc043103fe00c3d0da07f7bd17ded8f3700eb7385b1ecb551158ee6ac7"} Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.573833 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.573814664 podStartE2EDuration="2.573814664s" podCreationTimestamp="2026-02-27 17:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:18:55.568982978 +0000 UTC m=+1534.084780565" watchObservedRunningTime="2026-02-27 17:18:55.573814664 +0000 UTC m=+1534.089612251" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.586754 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.5867358769999997 podStartE2EDuration="2.586735877s" podCreationTimestamp="2026-02-27 17:18:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:18:55.585749869 +0000 UTC m=+1534.101547456" watchObservedRunningTime="2026-02-27 17:18:55.586735877 +0000 UTC m=+1534.102533464" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.615548 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.650943 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:18:55 
crc kubenswrapper[4708]: I0227 17:18:55.662892 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:18:55 crc kubenswrapper[4708]: E0227 17:18:55.663446 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="ceilometer-notification-agent" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.663471 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="ceilometer-notification-agent" Feb 27 17:18:55 crc kubenswrapper[4708]: E0227 17:18:55.663490 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="proxy-httpd" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.663499 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="proxy-httpd" Feb 27 17:18:55 crc kubenswrapper[4708]: E0227 17:18:55.663523 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="ceilometer-central-agent" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.663531 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="ceilometer-central-agent" Feb 27 17:18:55 crc kubenswrapper[4708]: E0227 17:18:55.663540 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="sg-core" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.663547 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="sg-core" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.663795 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="ceilometer-notification-agent" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.663821 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="proxy-httpd" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.663837 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="ceilometer-central-agent" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.663882 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" containerName="sg-core" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.666228 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.668878 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.669802 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.681288 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.743171 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc4d87e6-9480-4dea-9771-4d11a34d8a25-run-httpd\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.743528 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc4d87e6-9480-4dea-9771-4d11a34d8a25-log-httpd\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.743620 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.743716 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-config-data\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.743833 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.743935 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-scripts\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.743999 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-489pq\" (UniqueName: \"kubernetes.io/projected/bc4d87e6-9480-4dea-9771-4d11a34d8a25-kube-api-access-489pq\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.846097 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc4d87e6-9480-4dea-9771-4d11a34d8a25-run-httpd\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.846301 4708 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc4d87e6-9480-4dea-9771-4d11a34d8a25-log-httpd\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.846344 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.846381 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-config-data\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.846427 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.846469 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-scripts\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.846512 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-489pq\" (UniqueName: \"kubernetes.io/projected/bc4d87e6-9480-4dea-9771-4d11a34d8a25-kube-api-access-489pq\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.847494 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc4d87e6-9480-4dea-9771-4d11a34d8a25-log-httpd\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.848170 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc4d87e6-9480-4dea-9771-4d11a34d8a25-run-httpd\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.852158 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.853739 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.854449 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-config-data\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.857478 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-scripts\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.865066 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-489pq\" (UniqueName: \"kubernetes.io/projected/bc4d87e6-9480-4dea-9771-4d11a34d8a25-kube-api-access-489pq\") pod \"ceilometer-0\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " pod="openstack/ceilometer-0" Feb 27 17:18:55 crc kubenswrapper[4708]: I0227 17:18:55.982704 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:18:56 crc kubenswrapper[4708]: I0227 17:18:56.074098 4708 scope.go:117] "RemoveContainer" containerID="b87cf45d3c166fc49119601a72fbee45c14fadffa35e3f34c1ac439e5db92d82" Feb 27 17:18:56 crc kubenswrapper[4708]: I0227 17:18:56.242020 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d911ddc-1b45-4338-a04b-bd45fa68c6b3" path="/var/lib/kubelet/pods/9d911ddc-1b45-4338-a04b-bd45fa68c6b3/volumes" Feb 27 17:18:56 crc kubenswrapper[4708]: I0227 17:18:56.544349 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:18:56 crc kubenswrapper[4708]: E0227 17:18:56.583715 4708 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 17:18:56 crc kubenswrapper[4708]: E0227 17:18:56.610982 4708 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 17:18:56 crc kubenswrapper[4708]: W0227 17:18:56.611170 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbc4d87e6_9480_4dea_9771_4d11a34d8a25.slice/crio-dfee5934f877f49c8ddc20fde63c294518315ea7b07881d55fff90a3abbe8fc8 WatchSource:0}: Error finding container dfee5934f877f49c8ddc20fde63c294518315ea7b07881d55fff90a3abbe8fc8: Status 404 returned error can't find the container with id dfee5934f877f49c8ddc20fde63c294518315ea7b07881d55fff90a3abbe8fc8 Feb 27 17:18:56 crc kubenswrapper[4708]: E0227 17:18:56.614934 4708 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 17:18:56 crc kubenswrapper[4708]: E0227 17:18:56.615001 4708 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: 
cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="3578e95b-5d98-4904-80fe-4991f9079b45" containerName="nova-scheduler-scheduler" Feb 27 17:18:57 crc kubenswrapper[4708]: I0227 17:18:57.596423 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc4d87e6-9480-4dea-9771-4d11a34d8a25","Type":"ContainerStarted","Data":"aa05ef778bded2ddaebd4f228583ec63fa5fba9c7441abc60462959f07c05f2b"} Feb 27 17:18:57 crc kubenswrapper[4708]: I0227 17:18:57.597077 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc4d87e6-9480-4dea-9771-4d11a34d8a25","Type":"ContainerStarted","Data":"dfee5934f877f49c8ddc20fde63c294518315ea7b07881d55fff90a3abbe8fc8"} Feb 27 17:18:58 crc kubenswrapper[4708]: E0227 17:18:58.119481 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c2b60e1_da8d_4c37_8a9b_fbf471e1eeab.slice/crio-conmon-8b8ddac9c35a192fa3d28063f08a0c7b300e04795e1b185c1c354c4aaf512a9b.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7c2b60e1_da8d_4c37_8a9b_fbf471e1eeab.slice/crio-8b8ddac9c35a192fa3d28063f08a0c7b300e04795e1b185c1c354c4aaf512a9b.scope\": RecentStats: unable to find data in memory cache]" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.381897 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.502206 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3578e95b-5d98-4904-80fe-4991f9079b45-config-data\") pod \"3578e95b-5d98-4904-80fe-4991f9079b45\" (UID: \"3578e95b-5d98-4904-80fe-4991f9079b45\") " Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.502291 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3578e95b-5d98-4904-80fe-4991f9079b45-combined-ca-bundle\") pod \"3578e95b-5d98-4904-80fe-4991f9079b45\" (UID: \"3578e95b-5d98-4904-80fe-4991f9079b45\") " Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.502412 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx5j6\" (UniqueName: \"kubernetes.io/projected/3578e95b-5d98-4904-80fe-4991f9079b45-kube-api-access-nx5j6\") pod \"3578e95b-5d98-4904-80fe-4991f9079b45\" (UID: \"3578e95b-5d98-4904-80fe-4991f9079b45\") " Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.507190 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3578e95b-5d98-4904-80fe-4991f9079b45-kube-api-access-nx5j6" (OuterVolumeSpecName: "kube-api-access-nx5j6") pod "3578e95b-5d98-4904-80fe-4991f9079b45" (UID: "3578e95b-5d98-4904-80fe-4991f9079b45"). InnerVolumeSpecName "kube-api-access-nx5j6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.537825 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3578e95b-5d98-4904-80fe-4991f9079b45-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3578e95b-5d98-4904-80fe-4991f9079b45" (UID: "3578e95b-5d98-4904-80fe-4991f9079b45"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.544958 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3578e95b-5d98-4904-80fe-4991f9079b45-config-data" (OuterVolumeSpecName: "config-data") pod "3578e95b-5d98-4904-80fe-4991f9079b45" (UID: "3578e95b-5d98-4904-80fe-4991f9079b45"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.604247 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx5j6\" (UniqueName: \"kubernetes.io/projected/3578e95b-5d98-4904-80fe-4991f9079b45-kube-api-access-nx5j6\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.604277 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3578e95b-5d98-4904-80fe-4991f9079b45-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.604287 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3578e95b-5d98-4904-80fe-4991f9079b45-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.606706 4708 generic.go:334] "Generic (PLEG): container finished" podID="3578e95b-5d98-4904-80fe-4991f9079b45" containerID="d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2" exitCode=0 Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.606752 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.606783 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3578e95b-5d98-4904-80fe-4991f9079b45","Type":"ContainerDied","Data":"d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2"} Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.607485 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"3578e95b-5d98-4904-80fe-4991f9079b45","Type":"ContainerDied","Data":"c8a10062d72c38a90a7b94f9116deb8b79aa8e4b92991b4a221188c13870990c"} Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.607507 4708 scope.go:117] "RemoveContainer" containerID="d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.609239 4708 generic.go:334] "Generic (PLEG): container finished" podID="7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" containerID="8b8ddac9c35a192fa3d28063f08a0c7b300e04795e1b185c1c354c4aaf512a9b" exitCode=0 Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.609287 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab","Type":"ContainerDied","Data":"8b8ddac9c35a192fa3d28063f08a0c7b300e04795e1b185c1c354c4aaf512a9b"} Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.609315 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab","Type":"ContainerDied","Data":"7adf9eb8783ae6254ad23bf0158cec99729ca6f4718223973b9bb88cb77e8ff1"} Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.609325 4708 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7adf9eb8783ae6254ad23bf0158cec99729ca6f4718223973b9bb88cb77e8ff1" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.611428 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc4d87e6-9480-4dea-9771-4d11a34d8a25","Type":"ContainerStarted","Data":"52e438ed8910ed3571b2a3555d61c3e0190f9c9f4b9213fe415d0253d6c69d64"} Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.619327 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.629990 4708 scope.go:117] "RemoveContainer" containerID="d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2" Feb 27 17:18:58 crc kubenswrapper[4708]: E0227 17:18:58.630494 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2\": container with ID starting with d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2 not found: ID does not exist" containerID="d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.630553 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2"} err="failed to get container status \"d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2\": rpc error: code = NotFound desc = could not find container \"d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2\": container with ID starting with d45d20a121aa6dbd50b107b70d67c28cfd5adaced775fd6d289009dada36d8f2 not found: ID does not exist" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.685470 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.687745 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.695714 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:18:58 crc kubenswrapper[4708]: E0227 17:18:58.696178 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" containerName="nova-api-api" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.696195 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" containerName="nova-api-api" Feb 27 17:18:58 crc kubenswrapper[4708]: E0227 17:18:58.696209 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3578e95b-5d98-4904-80fe-4991f9079b45" containerName="nova-scheduler-scheduler" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.696215 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="3578e95b-5d98-4904-80fe-4991f9079b45" containerName="nova-scheduler-scheduler" Feb 27 17:18:58 crc kubenswrapper[4708]: E0227 17:18:58.696241 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" containerName="nova-api-log" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.696247 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" containerName="nova-api-log" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.696447 4708 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="3578e95b-5d98-4904-80fe-4991f9079b45" containerName="nova-scheduler-scheduler" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.696465 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" containerName="nova-api-log" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.696471 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" containerName="nova-api-api" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.697236 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.701655 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.703426 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.815682 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-277cj\" (UniqueName: \"kubernetes.io/projected/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-kube-api-access-277cj\") pod \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.815854 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-combined-ca-bundle\") pod \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.815896 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-logs\") pod \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.816115 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-config-data\") pod \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\" (UID: \"7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab\") " Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.816743 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cpt94\" (UniqueName: \"kubernetes.io/projected/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-kube-api-access-cpt94\") pod \"nova-scheduler-0\" (UID: \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.816825 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-config-data\") pod \"nova-scheduler-0\" (UID: \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.817004 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\") " 
pod="openstack/nova-scheduler-0" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.821050 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-kube-api-access-277cj" (OuterVolumeSpecName: "kube-api-access-277cj") pod "7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" (UID: "7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab"). InnerVolumeSpecName "kube-api-access-277cj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.822662 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-logs" (OuterVolumeSpecName: "logs") pod "7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" (UID: "7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.862584 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-config-data" (OuterVolumeSpecName: "config-data") pod "7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" (UID: "7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.874244 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" (UID: "7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.919195 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cpt94\" (UniqueName: \"kubernetes.io/projected/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-kube-api-access-cpt94\") pod \"nova-scheduler-0\" (UID: \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.919275 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-config-data\") pod \"nova-scheduler-0\" (UID: \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.919353 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.919438 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.919461 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-277cj\" (UniqueName: \"kubernetes.io/projected/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-kube-api-access-277cj\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.919474 4708 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.919486 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.923530 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.923984 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-config-data\") pod \"nova-scheduler-0\" (UID: \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:58 crc kubenswrapper[4708]: I0227 17:18:58.936750 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cpt94\" (UniqueName: \"kubernetes.io/projected/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-kube-api-access-cpt94\") pod \"nova-scheduler-0\" (UID: \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\") " pod="openstack/nova-scheduler-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.015672 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.015897 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.016355 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.628476 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc4d87e6-9480-4dea-9771-4d11a34d8a25","Type":"ContainerStarted","Data":"e8a5f0f322c51e72c6f4480e265ae837fbb5d151be4fbb77622e5ae5214fb716"} Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.628747 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.670762 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.686482 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.705914 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.707898 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.710321 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.714600 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.772718 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-logs\") pod \"nova-api-0\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " pod="openstack/nova-api-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.773023 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-config-data\") pod \"nova-api-0\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " pod="openstack/nova-api-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.773097 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " pod="openstack/nova-api-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.773231 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdl8m\" (UniqueName: \"kubernetes.io/projected/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-kube-api-access-fdl8m\") pod \"nova-api-0\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " pod="openstack/nova-api-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.875054 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdl8m\" (UniqueName: \"kubernetes.io/projected/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-kube-api-access-fdl8m\") pod \"nova-api-0\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " pod="openstack/nova-api-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.875148 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-logs\") pod \"nova-api-0\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " pod="openstack/nova-api-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.875253 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-config-data\") pod \"nova-api-0\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " pod="openstack/nova-api-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.875281 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " pod="openstack/nova-api-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.875910 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-logs\") pod \"nova-api-0\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " 
pod="openstack/nova-api-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.888194 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " pod="openstack/nova-api-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.890503 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdl8m\" (UniqueName: \"kubernetes.io/projected/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-kube-api-access-fdl8m\") pod \"nova-api-0\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " pod="openstack/nova-api-0" Feb 27 17:18:59 crc kubenswrapper[4708]: I0227 17:18:59.890593 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-config-data\") pod \"nova-api-0\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " pod="openstack/nova-api-0" Feb 27 17:19:00 crc kubenswrapper[4708]: I0227 17:19:00.029711 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:19:00 crc kubenswrapper[4708]: I0227 17:19:00.260321 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3578e95b-5d98-4904-80fe-4991f9079b45" path="/var/lib/kubelet/pods/3578e95b-5d98-4904-80fe-4991f9079b45/volumes" Feb 27 17:19:00 crc kubenswrapper[4708]: I0227 17:19:00.261921 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab" path="/var/lib/kubelet/pods/7c2b60e1-da8d-4c37-8a9b-fbf471e1eeab/volumes" Feb 27 17:19:00 crc kubenswrapper[4708]: I0227 17:19:00.275218 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:19:00 crc kubenswrapper[4708]: W0227 17:19:00.277254 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podaac9dfe8_f287_48b4_bebb_80f6d4ce57cc.slice/crio-4099b7a26a0ce03bc355ab61834775bd12c14debb0e5a1e961e57f84a797c47a WatchSource:0}: Error finding container 4099b7a26a0ce03bc355ab61834775bd12c14debb0e5a1e961e57f84a797c47a: Status 404 returned error can't find the container with id 4099b7a26a0ce03bc355ab61834775bd12c14debb0e5a1e961e57f84a797c47a Feb 27 17:19:00 crc kubenswrapper[4708]: I0227 17:19:00.576009 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:19:00 crc kubenswrapper[4708]: I0227 17:19:00.641593 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc","Type":"ContainerStarted","Data":"acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c"} Feb 27 17:19:00 crc kubenswrapper[4708]: I0227 17:19:00.641888 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc","Type":"ContainerStarted","Data":"4099b7a26a0ce03bc355ab61834775bd12c14debb0e5a1e961e57f84a797c47a"} Feb 27 17:19:00 crc kubenswrapper[4708]: I0227 17:19:00.643488 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e0b7572-01b6-4117-b4fb-2d6d24db7f86","Type":"ContainerStarted","Data":"26fa052e9ec81cd5df46e07d6ec10f81cdbbc6dbdc372499bacd98487e37766b"} Feb 27 17:19:00 crc kubenswrapper[4708]: I0227 17:19:00.659131 4708 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.6591120200000002 podStartE2EDuration="2.65911202s" podCreationTimestamp="2026-02-27 17:18:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:19:00.657038602 +0000 UTC m=+1539.172836189" watchObservedRunningTime="2026-02-27 17:19:00.65911202 +0000 UTC m=+1539.174909617" Feb 27 17:19:01 crc kubenswrapper[4708]: I0227 17:19:01.657104 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e0b7572-01b6-4117-b4fb-2d6d24db7f86","Type":"ContainerStarted","Data":"3179a75bfcf1ad2560e033ec66e99456cd2e2164e34fe822bdee116228e15113"} Feb 27 17:19:01 crc kubenswrapper[4708]: I0227 17:19:01.657148 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e0b7572-01b6-4117-b4fb-2d6d24db7f86","Type":"ContainerStarted","Data":"c956f141bae420517a270af84fe6d12651b411918de9521466f40b1963e5c8a2"} Feb 27 17:19:01 crc kubenswrapper[4708]: I0227 17:19:01.661783 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc4d87e6-9480-4dea-9771-4d11a34d8a25","Type":"ContainerStarted","Data":"eb19d4458b24f8577944ac3b3f3eaaa24bde083ffaf1e0bc046e5fc23c380e69"} Feb 27 17:19:01 crc kubenswrapper[4708]: I0227 17:19:01.662080 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 17:19:01 crc kubenswrapper[4708]: I0227 17:19:01.679309 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.679292512 podStartE2EDuration="2.679292512s" podCreationTimestamp="2026-02-27 17:18:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:19:01.675531096 +0000 UTC m=+1540.191328703" watchObservedRunningTime="2026-02-27 17:19:01.679292512 +0000 UTC m=+1540.195090099" Feb 27 17:19:01 crc kubenswrapper[4708]: I0227 17:19:01.713723 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.661440597 podStartE2EDuration="6.713695399s" podCreationTimestamp="2026-02-27 17:18:55 +0000 UTC" firstStartedPulling="2026-02-27 17:18:56.616466228 +0000 UTC m=+1535.132263815" lastFinishedPulling="2026-02-27 17:19:00.66872102 +0000 UTC m=+1539.184518617" observedRunningTime="2026-02-27 17:19:01.69593753 +0000 UTC m=+1540.211735147" watchObservedRunningTime="2026-02-27 17:19:01.713695399 +0000 UTC m=+1540.229493026" Feb 27 17:19:04 crc kubenswrapper[4708]: I0227 17:19:04.005957 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Feb 27 17:19:04 crc kubenswrapper[4708]: I0227 17:19:04.015659 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 17:19:04 crc kubenswrapper[4708]: I0227 17:19:04.015731 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 17:19:04 crc kubenswrapper[4708]: I0227 17:19:04.016474 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 17:19:05 crc kubenswrapper[4708]: I0227 17:19:05.028055 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" 
podUID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 17:19:05 crc kubenswrapper[4708]: I0227 17:19:05.028073 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 17:19:05 crc kubenswrapper[4708]: I0227 17:19:05.631231 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:19:05 crc kubenswrapper[4708]: I0227 17:19:05.631313 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:19:09 crc kubenswrapper[4708]: I0227 17:19:09.017686 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 17:19:09 crc kubenswrapper[4708]: I0227 17:19:09.079073 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 17:19:09 crc kubenswrapper[4708]: I0227 17:19:09.826883 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 17:19:10 crc kubenswrapper[4708]: I0227 17:19:10.033026 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 17:19:10 crc kubenswrapper[4708]: I0227 17:19:10.034613 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 17:19:11 crc kubenswrapper[4708]: I0227 17:19:11.113013 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1e0b7572-01b6-4117-b4fb-2d6d24db7f86" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.228:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 17:19:11 crc kubenswrapper[4708]: I0227 17:19:11.113142 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1e0b7572-01b6-4117-b4fb-2d6d24db7f86" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.228:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 17:19:14 crc kubenswrapper[4708]: I0227 17:19:14.025274 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 17:19:14 crc kubenswrapper[4708]: I0227 17:19:14.028201 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 17:19:14 crc kubenswrapper[4708]: I0227 17:19:14.035708 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 17:19:14 crc kubenswrapper[4708]: I0227 17:19:14.861506 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/nova-metadata-0" Feb 27 17:19:16 crc kubenswrapper[4708]: I0227 17:19:16.876653 4708 generic.go:334] "Generic (PLEG): container finished" podID="e79c3134-0ea1-4730-9fc0-74f3b91d5fae" containerID="b53ac37eb796c474a372bb2fb0eb15a25c785f9bf4f55d3d8bee5ec2e99f6e62" exitCode=137 Feb 27 17:19:16 crc kubenswrapper[4708]: I0227 17:19:16.876785 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e79c3134-0ea1-4730-9fc0-74f3b91d5fae","Type":"ContainerDied","Data":"b53ac37eb796c474a372bb2fb0eb15a25c785f9bf4f55d3d8bee5ec2e99f6e62"} Feb 27 17:19:16 crc kubenswrapper[4708]: I0227 17:19:16.877020 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"e79c3134-0ea1-4730-9fc0-74f3b91d5fae","Type":"ContainerDied","Data":"2e5b42b4eecf8a73ba377d71028caf57766d47481efe555bdc7a08f996b66244"} Feb 27 17:19:16 crc kubenswrapper[4708]: I0227 17:19:16.877043 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2e5b42b4eecf8a73ba377d71028caf57766d47481efe555bdc7a08f996b66244" Feb 27 17:19:16 crc kubenswrapper[4708]: I0227 17:19:16.939402 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.076635 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-combined-ca-bundle\") pod \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\" (UID: \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\") " Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.076881 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-config-data\") pod \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\" (UID: \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\") " Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.077206 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzr24\" (UniqueName: \"kubernetes.io/projected/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-kube-api-access-wzr24\") pod \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\" (UID: \"e79c3134-0ea1-4730-9fc0-74f3b91d5fae\") " Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.086095 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-kube-api-access-wzr24" (OuterVolumeSpecName: "kube-api-access-wzr24") pod "e79c3134-0ea1-4730-9fc0-74f3b91d5fae" (UID: "e79c3134-0ea1-4730-9fc0-74f3b91d5fae"). InnerVolumeSpecName "kube-api-access-wzr24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.119750 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e79c3134-0ea1-4730-9fc0-74f3b91d5fae" (UID: "e79c3134-0ea1-4730-9fc0-74f3b91d5fae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.132445 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-config-data" (OuterVolumeSpecName: "config-data") pod "e79c3134-0ea1-4730-9fc0-74f3b91d5fae" (UID: "e79c3134-0ea1-4730-9fc0-74f3b91d5fae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.183992 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.184030 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.184041 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzr24\" (UniqueName: \"kubernetes.io/projected/e79c3134-0ea1-4730-9fc0-74f3b91d5fae-kube-api-access-wzr24\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.891103 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.952506 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.972918 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.992624 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:19:17 crc kubenswrapper[4708]: E0227 17:19:17.993615 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e79c3134-0ea1-4730-9fc0-74f3b91d5fae" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.993648 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e79c3134-0ea1-4730-9fc0-74f3b91d5fae" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.995386 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="e79c3134-0ea1-4730-9fc0-74f3b91d5fae" containerName="nova-cell1-novncproxy-novncproxy" Feb 27 17:19:17 crc kubenswrapper[4708]: I0227 17:19:17.996618 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.007302 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.013587 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.014127 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.014476 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.110471 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6aea8fe-6682-4d69-90d7-173b5d089d5f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.110563 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6aea8fe-6682-4d69-90d7-173b5d089d5f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.110633 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6aea8fe-6682-4d69-90d7-173b5d089d5f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.110702 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6aea8fe-6682-4d69-90d7-173b5d089d5f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.110792 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtsc8\" (UniqueName: \"kubernetes.io/projected/f6aea8fe-6682-4d69-90d7-173b5d089d5f-kube-api-access-dtsc8\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.211995 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtsc8\" (UniqueName: \"kubernetes.io/projected/f6aea8fe-6682-4d69-90d7-173b5d089d5f-kube-api-access-dtsc8\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.212096 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6aea8fe-6682-4d69-90d7-173b5d089d5f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " 
pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.212137 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6aea8fe-6682-4d69-90d7-173b5d089d5f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.212164 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6aea8fe-6682-4d69-90d7-173b5d089d5f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.212220 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6aea8fe-6682-4d69-90d7-173b5d089d5f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.219689 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6aea8fe-6682-4d69-90d7-173b5d089d5f-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.220138 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6aea8fe-6682-4d69-90d7-173b5d089d5f-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.220263 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6aea8fe-6682-4d69-90d7-173b5d089d5f-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.226531 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/f6aea8fe-6682-4d69-90d7-173b5d089d5f-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.233581 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtsc8\" (UniqueName: \"kubernetes.io/projected/f6aea8fe-6682-4d69-90d7-173b5d089d5f-kube-api-access-dtsc8\") pod \"nova-cell1-novncproxy-0\" (UID: \"f6aea8fe-6682-4d69-90d7-173b5d089d5f\") " pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.243694 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e79c3134-0ea1-4730-9fc0-74f3b91d5fae" path="/var/lib/kubelet/pods/e79c3134-0ea1-4730-9fc0-74f3b91d5fae/volumes" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.325445 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:18 crc kubenswrapper[4708]: I0227 17:19:18.911206 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Feb 27 17:19:19 crc kubenswrapper[4708]: I0227 17:19:19.922252 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f6aea8fe-6682-4d69-90d7-173b5d089d5f","Type":"ContainerStarted","Data":"e65aba2b8408711fd44d48605404710fe8e5f76a6c0b209bcd6a5ce21f0d0fa9"} Feb 27 17:19:19 crc kubenswrapper[4708]: I0227 17:19:19.922819 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"f6aea8fe-6682-4d69-90d7-173b5d089d5f","Type":"ContainerStarted","Data":"e40d38b01bdfb936a56a4d0b740a5a4fa320146054ba13d84600a4e1eec586e3"} Feb 27 17:19:20 crc kubenswrapper[4708]: I0227 17:19:20.037229 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 17:19:20 crc kubenswrapper[4708]: I0227 17:19:20.039359 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 17:19:20 crc kubenswrapper[4708]: I0227 17:19:20.039719 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 17:19:20 crc kubenswrapper[4708]: I0227 17:19:20.042448 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 17:19:20 crc kubenswrapper[4708]: I0227 17:19:20.076252 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=3.076222808 podStartE2EDuration="3.076222808s" podCreationTimestamp="2026-02-27 17:19:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:19:19.950259869 +0000 UTC m=+1558.466057466" watchObservedRunningTime="2026-02-27 17:19:20.076222808 +0000 UTC m=+1558.592020425" Feb 27 17:19:20 crc kubenswrapper[4708]: I0227 17:19:20.930373 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 17:19:20 crc kubenswrapper[4708]: I0227 17:19:20.934697 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.159536 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-cvtwx"] Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.161649 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.174322 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-cvtwx"] Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.203508 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.203597 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.203646 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqzxq\" (UniqueName: \"kubernetes.io/projected/52953be0-5d65-4612-999f-0c6740c4909b-kube-api-access-fqzxq\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.203760 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.203837 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.203882 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-config\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.304736 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-config\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.304806 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.304875 4708 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.304919 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqzxq\" (UniqueName: \"kubernetes.io/projected/52953be0-5d65-4612-999f-0c6740c4909b-kube-api-access-fqzxq\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.304960 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.305019 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.305727 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-dns-svc\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.305733 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-config\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.305743 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-ovsdbserver-sb\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.305864 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-ovsdbserver-nb\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.305970 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-dns-swift-storage-0\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.326443 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqzxq\" (UniqueName: 
\"kubernetes.io/projected/52953be0-5d65-4612-999f-0c6740c4909b-kube-api-access-fqzxq\") pod \"dnsmasq-dns-5fd9b586ff-cvtwx\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") " pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:21 crc kubenswrapper[4708]: I0227 17:19:21.477590 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:22 crc kubenswrapper[4708]: I0227 17:19:22.148799 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-cvtwx"] Feb 27 17:19:22 crc kubenswrapper[4708]: I0227 17:19:22.974886 4708 generic.go:334] "Generic (PLEG): container finished" podID="52953be0-5d65-4612-999f-0c6740c4909b" containerID="9c310302045200ba2d4bbb4242ab6731de7f7320aa2d17b62909fbff28e0c472" exitCode=0 Feb 27 17:19:22 crc kubenswrapper[4708]: I0227 17:19:22.975190 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" event={"ID":"52953be0-5d65-4612-999f-0c6740c4909b","Type":"ContainerDied","Data":"9c310302045200ba2d4bbb4242ab6731de7f7320aa2d17b62909fbff28e0c472"} Feb 27 17:19:22 crc kubenswrapper[4708]: I0227 17:19:22.975307 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" event={"ID":"52953be0-5d65-4612-999f-0c6740c4909b","Type":"ContainerStarted","Data":"1b557e5aa8d8d09c3d1586aa4845cef96026dc8203342f41b786ed69590aa3f4"} Feb 27 17:19:23 crc kubenswrapper[4708]: I0227 17:19:23.325827 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:23 crc kubenswrapper[4708]: I0227 17:19:23.407502 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:19:23 crc kubenswrapper[4708]: I0227 17:19:23.407919 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="proxy-httpd" containerID="cri-o://eb19d4458b24f8577944ac3b3f3eaaa24bde083ffaf1e0bc046e5fc23c380e69" gracePeriod=30 Feb 27 17:19:23 crc kubenswrapper[4708]: I0227 17:19:23.407836 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="ceilometer-central-agent" containerID="cri-o://aa05ef778bded2ddaebd4f228583ec63fa5fba9c7441abc60462959f07c05f2b" gracePeriod=30 Feb 27 17:19:23 crc kubenswrapper[4708]: I0227 17:19:23.407986 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="ceilometer-notification-agent" containerID="cri-o://52e438ed8910ed3571b2a3555d61c3e0190f9c9f4b9213fe415d0253d6c69d64" gracePeriod=30 Feb 27 17:19:23 crc kubenswrapper[4708]: I0227 17:19:23.407929 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="sg-core" containerID="cri-o://e8a5f0f322c51e72c6f4480e265ae837fbb5d151be4fbb77622e5ae5214fb716" gracePeriod=30 Feb 27 17:19:23 crc kubenswrapper[4708]: I0227 17:19:23.414699 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.226:3000/\": EOF" Feb 27 17:19:23 crc kubenswrapper[4708]: I0227 17:19:23.581734 4708 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:19:24 crc kubenswrapper[4708]: I0227 17:19:24.027828 4708 generic.go:334] "Generic (PLEG): container finished" podID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerID="eb19d4458b24f8577944ac3b3f3eaaa24bde083ffaf1e0bc046e5fc23c380e69" exitCode=0 Feb 27 17:19:24 crc kubenswrapper[4708]: I0227 17:19:24.027877 4708 generic.go:334] "Generic (PLEG): container finished" podID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerID="e8a5f0f322c51e72c6f4480e265ae837fbb5d151be4fbb77622e5ae5214fb716" exitCode=2 Feb 27 17:19:24 crc kubenswrapper[4708]: I0227 17:19:24.027886 4708 generic.go:334] "Generic (PLEG): container finished" podID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerID="aa05ef778bded2ddaebd4f228583ec63fa5fba9c7441abc60462959f07c05f2b" exitCode=0 Feb 27 17:19:24 crc kubenswrapper[4708]: I0227 17:19:24.027939 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc4d87e6-9480-4dea-9771-4d11a34d8a25","Type":"ContainerDied","Data":"eb19d4458b24f8577944ac3b3f3eaaa24bde083ffaf1e0bc046e5fc23c380e69"} Feb 27 17:19:24 crc kubenswrapper[4708]: I0227 17:19:24.027968 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc4d87e6-9480-4dea-9771-4d11a34d8a25","Type":"ContainerDied","Data":"e8a5f0f322c51e72c6f4480e265ae837fbb5d151be4fbb77622e5ae5214fb716"} Feb 27 17:19:24 crc kubenswrapper[4708]: I0227 17:19:24.027978 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc4d87e6-9480-4dea-9771-4d11a34d8a25","Type":"ContainerDied","Data":"aa05ef778bded2ddaebd4f228583ec63fa5fba9c7441abc60462959f07c05f2b"} Feb 27 17:19:24 crc kubenswrapper[4708]: I0227 17:19:24.032694 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1e0b7572-01b6-4117-b4fb-2d6d24db7f86" containerName="nova-api-log" containerID="cri-o://c956f141bae420517a270af84fe6d12651b411918de9521466f40b1963e5c8a2" gracePeriod=30 Feb 27 17:19:24 crc kubenswrapper[4708]: I0227 17:19:24.033792 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" event={"ID":"52953be0-5d65-4612-999f-0c6740c4909b","Type":"ContainerStarted","Data":"6a6b4f087880920a43b080acf9aca6912b7e056558aac6b19bb7deb3e9206bf5"} Feb 27 17:19:24 crc kubenswrapper[4708]: I0227 17:19:24.033823 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:24 crc kubenswrapper[4708]: I0227 17:19:24.034126 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="1e0b7572-01b6-4117-b4fb-2d6d24db7f86" containerName="nova-api-api" containerID="cri-o://3179a75bfcf1ad2560e033ec66e99456cd2e2164e34fe822bdee116228e15113" gracePeriod=30 Feb 27 17:19:24 crc kubenswrapper[4708]: I0227 17:19:24.068995 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" podStartSLOduration=3.068977308 podStartE2EDuration="3.068977308s" podCreationTimestamp="2026-02-27 17:19:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:19:24.068479264 +0000 UTC m=+1562.584276851" watchObservedRunningTime="2026-02-27 17:19:24.068977308 +0000 UTC m=+1562.584774895" Feb 27 17:19:25 crc kubenswrapper[4708]: I0227 17:19:25.042595 4708 generic.go:334] 
"Generic (PLEG): container finished" podID="1e0b7572-01b6-4117-b4fb-2d6d24db7f86" containerID="c956f141bae420517a270af84fe6d12651b411918de9521466f40b1963e5c8a2" exitCode=143 Feb 27 17:19:25 crc kubenswrapper[4708]: I0227 17:19:25.042683 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e0b7572-01b6-4117-b4fb-2d6d24db7f86","Type":"ContainerDied","Data":"c956f141bae420517a270af84fe6d12651b411918de9521466f40b1963e5c8a2"} Feb 27 17:19:25 crc kubenswrapper[4708]: I0227 17:19:25.983444 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.226:3000/\": dial tcp 10.217.0.226:3000: connect: connection refused" Feb 27 17:19:27 crc kubenswrapper[4708]: I0227 17:19:27.888597 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.049779 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-combined-ca-bundle\") pod \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.049904 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-logs\") pod \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.050060 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-config-data\") pod \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.050097 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdl8m\" (UniqueName: \"kubernetes.io/projected/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-kube-api-access-fdl8m\") pod \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\" (UID: \"1e0b7572-01b6-4117-b4fb-2d6d24db7f86\") " Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.051049 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-logs" (OuterVolumeSpecName: "logs") pod "1e0b7572-01b6-4117-b4fb-2d6d24db7f86" (UID: "1e0b7572-01b6-4117-b4fb-2d6d24db7f86"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.069124 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-kube-api-access-fdl8m" (OuterVolumeSpecName: "kube-api-access-fdl8m") pod "1e0b7572-01b6-4117-b4fb-2d6d24db7f86" (UID: "1e0b7572-01b6-4117-b4fb-2d6d24db7f86"). InnerVolumeSpecName "kube-api-access-fdl8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.078386 4708 generic.go:334] "Generic (PLEG): container finished" podID="1e0b7572-01b6-4117-b4fb-2d6d24db7f86" containerID="3179a75bfcf1ad2560e033ec66e99456cd2e2164e34fe822bdee116228e15113" exitCode=0 Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.078450 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.078469 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e0b7572-01b6-4117-b4fb-2d6d24db7f86","Type":"ContainerDied","Data":"3179a75bfcf1ad2560e033ec66e99456cd2e2164e34fe822bdee116228e15113"} Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.079289 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1e0b7572-01b6-4117-b4fb-2d6d24db7f86","Type":"ContainerDied","Data":"26fa052e9ec81cd5df46e07d6ec10f81cdbbc6dbdc372499bacd98487e37766b"} Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.079328 4708 scope.go:117] "RemoveContainer" containerID="3179a75bfcf1ad2560e033ec66e99456cd2e2164e34fe822bdee116228e15113" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.080038 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-config-data" (OuterVolumeSpecName: "config-data") pod "1e0b7572-01b6-4117-b4fb-2d6d24db7f86" (UID: "1e0b7572-01b6-4117-b4fb-2d6d24db7f86"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.101214 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1e0b7572-01b6-4117-b4fb-2d6d24db7f86" (UID: "1e0b7572-01b6-4117-b4fb-2d6d24db7f86"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.152166 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.152204 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdl8m\" (UniqueName: \"kubernetes.io/projected/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-kube-api-access-fdl8m\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.152216 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.152225 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1e0b7572-01b6-4117-b4fb-2d6d24db7f86-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.160345 4708 scope.go:117] "RemoveContainer" containerID="c956f141bae420517a270af84fe6d12651b411918de9521466f40b1963e5c8a2" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.180792 4708 scope.go:117] "RemoveContainer" containerID="3179a75bfcf1ad2560e033ec66e99456cd2e2164e34fe822bdee116228e15113" Feb 27 17:19:28 crc kubenswrapper[4708]: E0227 17:19:28.181306 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3179a75bfcf1ad2560e033ec66e99456cd2e2164e34fe822bdee116228e15113\": container with ID starting with 3179a75bfcf1ad2560e033ec66e99456cd2e2164e34fe822bdee116228e15113 not found: ID does not exist" containerID="3179a75bfcf1ad2560e033ec66e99456cd2e2164e34fe822bdee116228e15113" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.181348 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3179a75bfcf1ad2560e033ec66e99456cd2e2164e34fe822bdee116228e15113"} err="failed to get container status \"3179a75bfcf1ad2560e033ec66e99456cd2e2164e34fe822bdee116228e15113\": rpc error: code = NotFound desc = could not find container \"3179a75bfcf1ad2560e033ec66e99456cd2e2164e34fe822bdee116228e15113\": container with ID starting with 3179a75bfcf1ad2560e033ec66e99456cd2e2164e34fe822bdee116228e15113 not found: ID does not exist" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.181374 4708 scope.go:117] "RemoveContainer" containerID="c956f141bae420517a270af84fe6d12651b411918de9521466f40b1963e5c8a2" Feb 27 17:19:28 crc kubenswrapper[4708]: E0227 17:19:28.183754 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c956f141bae420517a270af84fe6d12651b411918de9521466f40b1963e5c8a2\": container with ID starting with c956f141bae420517a270af84fe6d12651b411918de9521466f40b1963e5c8a2 not found: ID does not exist" containerID="c956f141bae420517a270af84fe6d12651b411918de9521466f40b1963e5c8a2" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.183797 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c956f141bae420517a270af84fe6d12651b411918de9521466f40b1963e5c8a2"} err="failed to get container status \"c956f141bae420517a270af84fe6d12651b411918de9521466f40b1963e5c8a2\": rpc error: code = NotFound desc = could 
not find container \"c956f141bae420517a270af84fe6d12651b411918de9521466f40b1963e5c8a2\": container with ID starting with c956f141bae420517a270af84fe6d12651b411918de9521466f40b1963e5c8a2 not found: ID does not exist" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.326425 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.352997 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.408363 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.417731 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.438003 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 17:19:28 crc kubenswrapper[4708]: E0227 17:19:28.438550 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0b7572-01b6-4117-b4fb-2d6d24db7f86" containerName="nova-api-log" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.438572 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0b7572-01b6-4117-b4fb-2d6d24db7f86" containerName="nova-api-log" Feb 27 17:19:28 crc kubenswrapper[4708]: E0227 17:19:28.438620 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e0b7572-01b6-4117-b4fb-2d6d24db7f86" containerName="nova-api-api" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.438630 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e0b7572-01b6-4117-b4fb-2d6d24db7f86" containerName="nova-api-api" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.438891 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e0b7572-01b6-4117-b4fb-2d6d24db7f86" containerName="nova-api-log" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.438917 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e0b7572-01b6-4117-b4fb-2d6d24db7f86" containerName="nova-api-api" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.440171 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.446160 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.446351 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.446615 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.455888 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.562981 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.563059 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr5lj\" (UniqueName: \"kubernetes.io/projected/a4c7fb6c-80fc-404b-883c-10da2cea06d6-kube-api-access-vr5lj\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.563095 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-public-tls-certs\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.563141 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-config-data\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.563232 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.563270 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c7fb6c-80fc-404b-883c-10da2cea06d6-logs\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.664817 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c7fb6c-80fc-404b-883c-10da2cea06d6-logs\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.665026 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-combined-ca-bundle\") pod \"nova-api-0\" (UID: 
\"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.665077 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr5lj\" (UniqueName: \"kubernetes.io/projected/a4c7fb6c-80fc-404b-883c-10da2cea06d6-kube-api-access-vr5lj\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.665116 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-public-tls-certs\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.665157 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-config-data\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.665215 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.665358 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c7fb6c-80fc-404b-883c-10da2cea06d6-logs\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.669733 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-config-data\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.670661 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.670858 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-public-tls-certs\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.681126 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-internal-tls-certs\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.695812 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr5lj\" (UniqueName: \"kubernetes.io/projected/a4c7fb6c-80fc-404b-883c-10da2cea06d6-kube-api-access-vr5lj\") pod \"nova-api-0\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " 
pod="openstack/nova-api-0" Feb 27 17:19:28 crc kubenswrapper[4708]: I0227 17:19:28.778533 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.101981 4708 generic.go:334] "Generic (PLEG): container finished" podID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerID="52e438ed8910ed3571b2a3555d61c3e0190f9c9f4b9213fe415d0253d6c69d64" exitCode=0 Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.102244 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc4d87e6-9480-4dea-9771-4d11a34d8a25","Type":"ContainerDied","Data":"52e438ed8910ed3571b2a3555d61c3e0190f9c9f4b9213fe415d0253d6c69d64"} Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.121308 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.277194 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-szwb6"] Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.278894 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.280631 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.282304 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.291572 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-szwb6"] Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.305991 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.400497 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-combined-ca-bundle\") pod \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.400638 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc4d87e6-9480-4dea-9771-4d11a34d8a25-run-httpd\") pod \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.400707 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc4d87e6-9480-4dea-9771-4d11a34d8a25-log-httpd\") pod \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.400779 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-scripts\") pod \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.400807 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-489pq\" (UniqueName: \"kubernetes.io/projected/bc4d87e6-9480-4dea-9771-4d11a34d8a25-kube-api-access-489pq\") pod \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.400825 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-sg-core-conf-yaml\") pod \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.400883 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-config-data\") pod \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\" (UID: \"bc4d87e6-9480-4dea-9771-4d11a34d8a25\") " Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.401067 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc4d87e6-9480-4dea-9771-4d11a34d8a25-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bc4d87e6-9480-4dea-9771-4d11a34d8a25" (UID: "bc4d87e6-9480-4dea-9771-4d11a34d8a25"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.401244 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-scripts\") pod \"nova-cell1-cell-mapping-szwb6\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.401292 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm2nv\" (UniqueName: \"kubernetes.io/projected/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-kube-api-access-cm2nv\") pod \"nova-cell1-cell-mapping-szwb6\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.401317 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-szwb6\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.401404 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-config-data\") pod \"nova-cell1-cell-mapping-szwb6\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.401423 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc4d87e6-9480-4dea-9771-4d11a34d8a25-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bc4d87e6-9480-4dea-9771-4d11a34d8a25" (UID: "bc4d87e6-9480-4dea-9771-4d11a34d8a25"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.401519 4708 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc4d87e6-9480-4dea-9771-4d11a34d8a25-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.401531 4708 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bc4d87e6-9480-4dea-9771-4d11a34d8a25-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.408157 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc4d87e6-9480-4dea-9771-4d11a34d8a25-kube-api-access-489pq" (OuterVolumeSpecName: "kube-api-access-489pq") pod "bc4d87e6-9480-4dea-9771-4d11a34d8a25" (UID: "bc4d87e6-9480-4dea-9771-4d11a34d8a25"). InnerVolumeSpecName "kube-api-access-489pq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.411068 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-scripts" (OuterVolumeSpecName: "scripts") pod "bc4d87e6-9480-4dea-9771-4d11a34d8a25" (UID: "bc4d87e6-9480-4dea-9771-4d11a34d8a25"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.449152 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bc4d87e6-9480-4dea-9771-4d11a34d8a25" (UID: "bc4d87e6-9480-4dea-9771-4d11a34d8a25"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.503161 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-config-data\") pod \"nova-cell1-cell-mapping-szwb6\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.503298 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-scripts\") pod \"nova-cell1-cell-mapping-szwb6\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.503330 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm2nv\" (UniqueName: \"kubernetes.io/projected/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-kube-api-access-cm2nv\") pod \"nova-cell1-cell-mapping-szwb6\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.503347 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-szwb6\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.503455 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.503467 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-489pq\" (UniqueName: \"kubernetes.io/projected/bc4d87e6-9480-4dea-9771-4d11a34d8a25-kube-api-access-489pq\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.503477 4708 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.511634 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-config-data\") pod \"nova-cell1-cell-mapping-szwb6\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.512412 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-scripts\") pod \"nova-cell1-cell-mapping-szwb6\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " 
pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.517050 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-szwb6\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.548408 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm2nv\" (UniqueName: \"kubernetes.io/projected/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-kube-api-access-cm2nv\") pod \"nova-cell1-cell-mapping-szwb6\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.550098 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.620354 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.623434 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bc4d87e6-9480-4dea-9771-4d11a34d8a25" (UID: "bc4d87e6-9480-4dea-9771-4d11a34d8a25"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.653615 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-config-data" (OuterVolumeSpecName: "config-data") pod "bc4d87e6-9480-4dea-9771-4d11a34d8a25" (UID: "bc4d87e6-9480-4dea-9771-4d11a34d8a25"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.709514 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:29 crc kubenswrapper[4708]: I0227 17:19:29.709550 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bc4d87e6-9480-4dea-9771-4d11a34d8a25-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.091633 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-szwb6"] Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.122105 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.122111 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bc4d87e6-9480-4dea-9771-4d11a34d8a25","Type":"ContainerDied","Data":"dfee5934f877f49c8ddc20fde63c294518315ea7b07881d55fff90a3abbe8fc8"} Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.122651 4708 scope.go:117] "RemoveContainer" containerID="eb19d4458b24f8577944ac3b3f3eaaa24bde083ffaf1e0bc046e5fc23c380e69" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.136152 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4c7fb6c-80fc-404b-883c-10da2cea06d6","Type":"ContainerStarted","Data":"720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7"} Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.136211 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4c7fb6c-80fc-404b-883c-10da2cea06d6","Type":"ContainerStarted","Data":"92a7cc9d6e627b0a907313a42fb5338f576650d0e6188ea4f951615fbc9a46ed"} Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.147463 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-szwb6" event={"ID":"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1","Type":"ContainerStarted","Data":"dfb50dc02d43cf7dc14f3199328762727711d42424ac33821623faa9d61843c3"} Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.156274 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.165947 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.184704 4708 scope.go:117] "RemoveContainer" containerID="e8a5f0f322c51e72c6f4480e265ae837fbb5d151be4fbb77622e5ae5214fb716" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.195411 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:19:30 crc kubenswrapper[4708]: E0227 17:19:30.195946 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="ceilometer-central-agent" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.196008 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="ceilometer-central-agent" Feb 27 17:19:30 crc kubenswrapper[4708]: E0227 17:19:30.196071 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="sg-core" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.196127 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="sg-core" Feb 27 17:19:30 crc kubenswrapper[4708]: E0227 17:19:30.196212 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="ceilometer-notification-agent" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.196293 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="ceilometer-notification-agent" Feb 27 17:19:30 crc kubenswrapper[4708]: E0227 17:19:30.196416 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="proxy-httpd" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.196465 4708 
state_mem.go:107] "Deleted CPUSet assignment" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="proxy-httpd" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.198066 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="ceilometer-central-agent" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.198148 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="proxy-httpd" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.198209 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="ceilometer-notification-agent" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.198261 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" containerName="sg-core" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.200248 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.202647 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.202892 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.205535 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.234355 4708 scope.go:117] "RemoveContainer" containerID="52e438ed8910ed3571b2a3555d61c3e0190f9c9f4b9213fe415d0253d6c69d64" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.251832 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e0b7572-01b6-4117-b4fb-2d6d24db7f86" path="/var/lib/kubelet/pods/1e0b7572-01b6-4117-b4fb-2d6d24db7f86/volumes" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.253457 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc4d87e6-9480-4dea-9771-4d11a34d8a25" path="/var/lib/kubelet/pods/bc4d87e6-9480-4dea-9771-4d11a34d8a25/volumes" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.271668 4708 scope.go:117] "RemoveContainer" containerID="aa05ef778bded2ddaebd4f228583ec63fa5fba9c7441abc60462959f07c05f2b" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.323249 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-scripts\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.323301 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.323352 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d025830-8db8-4719-8ea5-66f9a27d1d42-run-httpd\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" 
Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.323381 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.323397 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7nj8d\" (UniqueName: \"kubernetes.io/projected/2d025830-8db8-4719-8ea5-66f9a27d1d42-kube-api-access-7nj8d\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.323432 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d025830-8db8-4719-8ea5-66f9a27d1d42-log-httpd\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.323688 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-config-data\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.425823 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-scripts\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.425894 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.425946 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d025830-8db8-4719-8ea5-66f9a27d1d42-run-httpd\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.425977 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.425997 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7nj8d\" (UniqueName: \"kubernetes.io/projected/2d025830-8db8-4719-8ea5-66f9a27d1d42-kube-api-access-7nj8d\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.426033 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d025830-8db8-4719-8ea5-66f9a27d1d42-log-httpd\") pod \"ceilometer-0\" 
(UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.426077 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-config-data\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.426510 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d025830-8db8-4719-8ea5-66f9a27d1d42-run-httpd\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.426897 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d025830-8db8-4719-8ea5-66f9a27d1d42-log-httpd\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.431277 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.431742 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-scripts\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.432239 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-config-data\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.433706 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.441928 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7nj8d\" (UniqueName: \"kubernetes.io/projected/2d025830-8db8-4719-8ea5-66f9a27d1d42-kube-api-access-7nj8d\") pod \"ceilometer-0\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " pod="openstack/ceilometer-0" Feb 27 17:19:30 crc kubenswrapper[4708]: I0227 17:19:30.514480 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:19:31 crc kubenswrapper[4708]: I0227 17:19:31.116164 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:19:31 crc kubenswrapper[4708]: I0227 17:19:31.132491 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:19:31 crc kubenswrapper[4708]: I0227 17:19:31.166761 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d025830-8db8-4719-8ea5-66f9a27d1d42","Type":"ContainerStarted","Data":"60bfc684bcba65ce0271883a9a1f0879f75122151b819d27cfd6f94ec3e873cf"} Feb 27 17:19:31 crc kubenswrapper[4708]: I0227 17:19:31.178875 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4c7fb6c-80fc-404b-883c-10da2cea06d6","Type":"ContainerStarted","Data":"bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e"} Feb 27 17:19:31 crc kubenswrapper[4708]: I0227 17:19:31.181286 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-szwb6" event={"ID":"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1","Type":"ContainerStarted","Data":"a6995c6d0a968ffac38663c17c29a199b1455a863c20e7ec885cde8cba392d2c"} Feb 27 17:19:31 crc kubenswrapper[4708]: I0227 17:19:31.202536 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=3.202519191 podStartE2EDuration="3.202519191s" podCreationTimestamp="2026-02-27 17:19:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:19:31.202198462 +0000 UTC m=+1569.717996049" watchObservedRunningTime="2026-02-27 17:19:31.202519191 +0000 UTC m=+1569.718316778" Feb 27 17:19:31 crc kubenswrapper[4708]: I0227 17:19:31.220181 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-szwb6" podStartSLOduration=2.220161937 podStartE2EDuration="2.220161937s" podCreationTimestamp="2026-02-27 17:19:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:19:31.217573374 +0000 UTC m=+1569.733370961" watchObservedRunningTime="2026-02-27 17:19:31.220161937 +0000 UTC m=+1569.735959524" Feb 27 17:19:31 crc kubenswrapper[4708]: I0227 17:19:31.479887 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" Feb 27 17:19:31 crc kubenswrapper[4708]: I0227 17:19:31.552204 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-8qqxf"] Feb 27 17:19:31 crc kubenswrapper[4708]: I0227 17:19:31.552483 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-78cd565959-8qqxf" podUID="e57f23c6-0486-40ad-907d-7776d4d30404" containerName="dnsmasq-dns" containerID="cri-o://a032eb5594b425d42be3bffb13103db3c9dd59a781ad87e9e0150e3c52481137" gracePeriod=10 Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.177352 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.194230 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d025830-8db8-4719-8ea5-66f9a27d1d42","Type":"ContainerStarted","Data":"6db89fd7589a05cbcea565348b7ad9e44284c9251b3084c6941ab1394cbb1b20"} Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.196177 4708 generic.go:334] "Generic (PLEG): container finished" podID="e57f23c6-0486-40ad-907d-7776d4d30404" containerID="a032eb5594b425d42be3bffb13103db3c9dd59a781ad87e9e0150e3c52481137" exitCode=0 Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.197158 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78cd565959-8qqxf" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.197277 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-8qqxf" event={"ID":"e57f23c6-0486-40ad-907d-7776d4d30404","Type":"ContainerDied","Data":"a032eb5594b425d42be3bffb13103db3c9dd59a781ad87e9e0150e3c52481137"} Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.197307 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78cd565959-8qqxf" event={"ID":"e57f23c6-0486-40ad-907d-7776d4d30404","Type":"ContainerDied","Data":"a58d47929b5784688a00fdc5276901520e843d3bb406bd381abf3d1caa0055de"} Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.197330 4708 scope.go:117] "RemoveContainer" containerID="a032eb5594b425d42be3bffb13103db3c9dd59a781ad87e9e0150e3c52481137" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.223713 4708 scope.go:117] "RemoveContainer" containerID="406947576be574d7b807980769b10a6645cd4da3a207d710c8d545e2698b9d28" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.262601 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-ovsdbserver-sb\") pod \"e57f23c6-0486-40ad-907d-7776d4d30404\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.262699 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-config\") pod \"e57f23c6-0486-40ad-907d-7776d4d30404\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.262739 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-ovsdbserver-nb\") pod \"e57f23c6-0486-40ad-907d-7776d4d30404\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.262759 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-dns-swift-storage-0\") pod \"e57f23c6-0486-40ad-907d-7776d4d30404\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.262890 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-dns-svc\") pod \"e57f23c6-0486-40ad-907d-7776d4d30404\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " Feb 27 
17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.262950 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgngj\" (UniqueName: \"kubernetes.io/projected/e57f23c6-0486-40ad-907d-7776d4d30404-kube-api-access-cgngj\") pod \"e57f23c6-0486-40ad-907d-7776d4d30404\" (UID: \"e57f23c6-0486-40ad-907d-7776d4d30404\") " Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.309399 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e57f23c6-0486-40ad-907d-7776d4d30404-kube-api-access-cgngj" (OuterVolumeSpecName: "kube-api-access-cgngj") pod "e57f23c6-0486-40ad-907d-7776d4d30404" (UID: "e57f23c6-0486-40ad-907d-7776d4d30404"). InnerVolumeSpecName "kube-api-access-cgngj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.315011 4708 scope.go:117] "RemoveContainer" containerID="a032eb5594b425d42be3bffb13103db3c9dd59a781ad87e9e0150e3c52481137" Feb 27 17:19:32 crc kubenswrapper[4708]: E0227 17:19:32.323086 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a032eb5594b425d42be3bffb13103db3c9dd59a781ad87e9e0150e3c52481137\": container with ID starting with a032eb5594b425d42be3bffb13103db3c9dd59a781ad87e9e0150e3c52481137 not found: ID does not exist" containerID="a032eb5594b425d42be3bffb13103db3c9dd59a781ad87e9e0150e3c52481137" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.323126 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a032eb5594b425d42be3bffb13103db3c9dd59a781ad87e9e0150e3c52481137"} err="failed to get container status \"a032eb5594b425d42be3bffb13103db3c9dd59a781ad87e9e0150e3c52481137\": rpc error: code = NotFound desc = could not find container \"a032eb5594b425d42be3bffb13103db3c9dd59a781ad87e9e0150e3c52481137\": container with ID starting with a032eb5594b425d42be3bffb13103db3c9dd59a781ad87e9e0150e3c52481137 not found: ID does not exist" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.323152 4708 scope.go:117] "RemoveContainer" containerID="406947576be574d7b807980769b10a6645cd4da3a207d710c8d545e2698b9d28" Feb 27 17:19:32 crc kubenswrapper[4708]: E0227 17:19:32.324294 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"406947576be574d7b807980769b10a6645cd4da3a207d710c8d545e2698b9d28\": container with ID starting with 406947576be574d7b807980769b10a6645cd4da3a207d710c8d545e2698b9d28 not found: ID does not exist" containerID="406947576be574d7b807980769b10a6645cd4da3a207d710c8d545e2698b9d28" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.324323 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"406947576be574d7b807980769b10a6645cd4da3a207d710c8d545e2698b9d28"} err="failed to get container status \"406947576be574d7b807980769b10a6645cd4da3a207d710c8d545e2698b9d28\": rpc error: code = NotFound desc = could not find container \"406947576be574d7b807980769b10a6645cd4da3a207d710c8d545e2698b9d28\": container with ID starting with 406947576be574d7b807980769b10a6645cd4da3a207d710c8d545e2698b9d28 not found: ID does not exist" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.372192 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgngj\" (UniqueName: \"kubernetes.io/projected/e57f23c6-0486-40ad-907d-7776d4d30404-kube-api-access-cgngj\") on node 
\"crc\" DevicePath \"\"" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.402565 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "e57f23c6-0486-40ad-907d-7776d4d30404" (UID: "e57f23c6-0486-40ad-907d-7776d4d30404"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.405521 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-config" (OuterVolumeSpecName: "config") pod "e57f23c6-0486-40ad-907d-7776d4d30404" (UID: "e57f23c6-0486-40ad-907d-7776d4d30404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.445354 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "e57f23c6-0486-40ad-907d-7776d4d30404" (UID: "e57f23c6-0486-40ad-907d-7776d4d30404"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.446452 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "e57f23c6-0486-40ad-907d-7776d4d30404" (UID: "e57f23c6-0486-40ad-907d-7776d4d30404"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.450979 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "e57f23c6-0486-40ad-907d-7776d4d30404" (UID: "e57f23c6-0486-40ad-907d-7776d4d30404"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.473718 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.473742 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.473751 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.473759 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.473767 4708 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/e57f23c6-0486-40ad-907d-7776d4d30404-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.554268 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-8qqxf"] Feb 27 17:19:32 crc kubenswrapper[4708]: I0227 17:19:32.565135 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78cd565959-8qqxf"] Feb 27 17:19:33 crc kubenswrapper[4708]: I0227 17:19:33.227028 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d025830-8db8-4719-8ea5-66f9a27d1d42","Type":"ContainerStarted","Data":"b503c15b21177bad93687499491de19332e9d9a81b0419f0e03d5839ab74f283"} Feb 27 17:19:34 crc kubenswrapper[4708]: I0227 17:19:34.300282 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e57f23c6-0486-40ad-907d-7776d4d30404" path="/var/lib/kubelet/pods/e57f23c6-0486-40ad-907d-7776d4d30404/volumes" Feb 27 17:19:34 crc kubenswrapper[4708]: I0227 17:19:34.304570 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d025830-8db8-4719-8ea5-66f9a27d1d42","Type":"ContainerStarted","Data":"9d4ea8772f63a02b21dc4d35a6c3ed99c8e0c6e6ae1b46830d2cfb3308f08898"} Feb 27 17:19:35 crc kubenswrapper[4708]: I0227 17:19:35.323568 4708 generic.go:334] "Generic (PLEG): container finished" podID="a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1" containerID="a6995c6d0a968ffac38663c17c29a199b1455a863c20e7ec885cde8cba392d2c" exitCode=0 Feb 27 17:19:35 crc kubenswrapper[4708]: I0227 17:19:35.323685 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-szwb6" event={"ID":"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1","Type":"ContainerDied","Data":"a6995c6d0a968ffac38663c17c29a199b1455a863c20e7ec885cde8cba392d2c"} Feb 27 17:19:35 crc kubenswrapper[4708]: I0227 17:19:35.631462 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:19:35 crc kubenswrapper[4708]: I0227 17:19:35.631520 4708 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:19:36 crc kubenswrapper[4708]: I0227 17:19:36.341623 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d025830-8db8-4719-8ea5-66f9a27d1d42","Type":"ContainerStarted","Data":"b7f244a7def2d7bb358fdca74dcd9b7c6f2a015328263e2625e18266836f0973"} Feb 27 17:19:36 crc kubenswrapper[4708]: I0227 17:19:36.386447 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.469990202 podStartE2EDuration="6.386424548s" podCreationTimestamp="2026-02-27 17:19:30 +0000 UTC" firstStartedPulling="2026-02-27 17:19:31.132169634 +0000 UTC m=+1569.647967221" lastFinishedPulling="2026-02-27 17:19:35.04860395 +0000 UTC m=+1573.564401567" observedRunningTime="2026-02-27 17:19:36.372731583 +0000 UTC m=+1574.888529210" watchObservedRunningTime="2026-02-27 17:19:36.386424548 +0000 UTC m=+1574.902222165" Feb 27 17:19:36 crc kubenswrapper[4708]: I0227 17:19:36.907918 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-78cd565959-8qqxf" podUID="e57f23c6-0486-40ad-907d-7776d4d30404" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.220:5353: i/o timeout" Feb 27 17:19:36 crc kubenswrapper[4708]: I0227 17:19:36.926702 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:36 crc kubenswrapper[4708]: I0227 17:19:36.985526 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm2nv\" (UniqueName: \"kubernetes.io/projected/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-kube-api-access-cm2nv\") pod \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " Feb 27 17:19:36 crc kubenswrapper[4708]: I0227 17:19:36.985684 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-config-data\") pod \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " Feb 27 17:19:36 crc kubenswrapper[4708]: I0227 17:19:36.985861 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-combined-ca-bundle\") pod \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " Feb 27 17:19:36 crc kubenswrapper[4708]: I0227 17:19:36.985994 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-scripts\") pod \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\" (UID: \"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1\") " Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:36.996009 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-kube-api-access-cm2nv" (OuterVolumeSpecName: "kube-api-access-cm2nv") pod "a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1" (UID: "a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1"). InnerVolumeSpecName "kube-api-access-cm2nv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.003958 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-scripts" (OuterVolumeSpecName: "scripts") pod "a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1" (UID: "a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.024619 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1" (UID: "a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.046980 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-config-data" (OuterVolumeSpecName: "config-data") pod "a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1" (UID: "a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.088823 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm2nv\" (UniqueName: \"kubernetes.io/projected/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-kube-api-access-cm2nv\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.088866 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.088876 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.088885 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.389110 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-szwb6" event={"ID":"a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1","Type":"ContainerDied","Data":"dfb50dc02d43cf7dc14f3199328762727711d42424ac33821623faa9d61843c3"} Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.389176 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-szwb6" Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.389195 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb50dc02d43cf7dc14f3199328762727711d42424ac33821623faa9d61843c3" Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.390161 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.570372 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.570635 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a4c7fb6c-80fc-404b-883c-10da2cea06d6" containerName="nova-api-log" containerID="cri-o://720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7" gracePeriod=30 Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.570721 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="a4c7fb6c-80fc-404b-883c-10da2cea06d6" containerName="nova-api-api" containerID="cri-o://bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e" gracePeriod=30 Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.582831 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.583074 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="aac9dfe8-f287-48b4-bebb-80f6d4ce57cc" containerName="nova-scheduler-scheduler" containerID="cri-o://acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c" gracePeriod=30 Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.617052 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.617572 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" containerName="nova-metadata-log" containerID="cri-o://fd9865d463e61e8b5e6ed048386215f3808fd9f161ddcdfdb9020fedec0f5c97" gracePeriod=30 Feb 27 17:19:37 crc kubenswrapper[4708]: I0227 17:19:37.617708 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" containerName="nova-metadata-metadata" containerID="cri-o://3d544791e73d4f674be2f6abcaceb7d5c72d1bd4235e1a57c21efe226bdc7caf" gracePeriod=30 Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.214601 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.313438 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-public-tls-certs\") pod \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.313500 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vr5lj\" (UniqueName: \"kubernetes.io/projected/a4c7fb6c-80fc-404b-883c-10da2cea06d6-kube-api-access-vr5lj\") pod \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.313534 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c7fb6c-80fc-404b-883c-10da2cea06d6-logs\") pod \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.313619 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-internal-tls-certs\") pod \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.313717 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-config-data\") pod \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.313821 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-combined-ca-bundle\") pod \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\" (UID: \"a4c7fb6c-80fc-404b-883c-10da2cea06d6\") " Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.314251 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4c7fb6c-80fc-404b-883c-10da2cea06d6-logs" (OuterVolumeSpecName: "logs") pod "a4c7fb6c-80fc-404b-883c-10da2cea06d6" (UID: "a4c7fb6c-80fc-404b-883c-10da2cea06d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.329998 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4c7fb6c-80fc-404b-883c-10da2cea06d6-kube-api-access-vr5lj" (OuterVolumeSpecName: "kube-api-access-vr5lj") pod "a4c7fb6c-80fc-404b-883c-10da2cea06d6" (UID: "a4c7fb6c-80fc-404b-883c-10da2cea06d6"). InnerVolumeSpecName "kube-api-access-vr5lj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.350031 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a4c7fb6c-80fc-404b-883c-10da2cea06d6" (UID: "a4c7fb6c-80fc-404b-883c-10da2cea06d6"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.363982 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-config-data" (OuterVolumeSpecName: "config-data") pod "a4c7fb6c-80fc-404b-883c-10da2cea06d6" (UID: "a4c7fb6c-80fc-404b-883c-10da2cea06d6"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.373407 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "a4c7fb6c-80fc-404b-883c-10da2cea06d6" (UID: "a4c7fb6c-80fc-404b-883c-10da2cea06d6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.381277 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "a4c7fb6c-80fc-404b-883c-10da2cea06d6" (UID: "a4c7fb6c-80fc-404b-883c-10da2cea06d6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.409587 4708 generic.go:334] "Generic (PLEG): container finished" podID="a4c7fb6c-80fc-404b-883c-10da2cea06d6" containerID="bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e" exitCode=0 Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.409617 4708 generic.go:334] "Generic (PLEG): container finished" podID="a4c7fb6c-80fc-404b-883c-10da2cea06d6" containerID="720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7" exitCode=143 Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.409660 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.409676 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4c7fb6c-80fc-404b-883c-10da2cea06d6","Type":"ContainerDied","Data":"bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e"} Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.409704 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4c7fb6c-80fc-404b-883c-10da2cea06d6","Type":"ContainerDied","Data":"720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7"} Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.409713 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"a4c7fb6c-80fc-404b-883c-10da2cea06d6","Type":"ContainerDied","Data":"92a7cc9d6e627b0a907313a42fb5338f576650d0e6188ea4f951615fbc9a46ed"} Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.409728 4708 scope.go:117] "RemoveContainer" containerID="bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.417386 4708 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.417416 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.417425 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.417433 4708 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/a4c7fb6c-80fc-404b-883c-10da2cea06d6-public-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.417441 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vr5lj\" (UniqueName: \"kubernetes.io/projected/a4c7fb6c-80fc-404b-883c-10da2cea06d6-kube-api-access-vr5lj\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.417451 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a4c7fb6c-80fc-404b-883c-10da2cea06d6-logs\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.417551 4708 generic.go:334] "Generic (PLEG): container finished" podID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" containerID="fd9865d463e61e8b5e6ed048386215f3808fd9f161ddcdfdb9020fedec0f5c97" exitCode=143 Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.418279 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d","Type":"ContainerDied","Data":"fd9865d463e61e8b5e6ed048386215f3808fd9f161ddcdfdb9020fedec0f5c97"} Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.449598 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.450437 4708 scope.go:117] "RemoveContainer" 
containerID="720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.458140 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.479319 4708 scope.go:117] "RemoveContainer" containerID="bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e" Feb 27 17:19:38 crc kubenswrapper[4708]: E0227 17:19:38.479744 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e\": container with ID starting with bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e not found: ID does not exist" containerID="bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.479866 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e"} err="failed to get container status \"bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e\": rpc error: code = NotFound desc = could not find container \"bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e\": container with ID starting with bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e not found: ID does not exist" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.479946 4708 scope.go:117] "RemoveContainer" containerID="720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.480634 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Feb 27 17:19:38 crc kubenswrapper[4708]: E0227 17:19:38.481049 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1" containerName="nova-manage" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.481066 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1" containerName="nova-manage" Feb 27 17:19:38 crc kubenswrapper[4708]: E0227 17:19:38.481099 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57f23c6-0486-40ad-907d-7776d4d30404" containerName="dnsmasq-dns" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.481106 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57f23c6-0486-40ad-907d-7776d4d30404" containerName="dnsmasq-dns" Feb 27 17:19:38 crc kubenswrapper[4708]: E0227 17:19:38.481117 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c7fb6c-80fc-404b-883c-10da2cea06d6" containerName="nova-api-api" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.481123 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c7fb6c-80fc-404b-883c-10da2cea06d6" containerName="nova-api-api" Feb 27 17:19:38 crc kubenswrapper[4708]: E0227 17:19:38.481134 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e57f23c6-0486-40ad-907d-7776d4d30404" containerName="init" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.481141 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e57f23c6-0486-40ad-907d-7776d4d30404" containerName="init" Feb 27 17:19:38 crc kubenswrapper[4708]: E0227 17:19:38.481152 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4c7fb6c-80fc-404b-883c-10da2cea06d6" containerName="nova-api-log" Feb 27 17:19:38 crc 
kubenswrapper[4708]: I0227 17:19:38.481157 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4c7fb6c-80fc-404b-883c-10da2cea06d6" containerName="nova-api-log" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.481343 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c7fb6c-80fc-404b-883c-10da2cea06d6" containerName="nova-api-log" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.481357 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="e57f23c6-0486-40ad-907d-7776d4d30404" containerName="dnsmasq-dns" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.481370 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4c7fb6c-80fc-404b-883c-10da2cea06d6" containerName="nova-api-api" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.481385 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1" containerName="nova-manage" Feb 27 17:19:38 crc kubenswrapper[4708]: E0227 17:19:38.481566 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7\": container with ID starting with 720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7 not found: ID does not exist" containerID="720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.481716 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7"} err="failed to get container status \"720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7\": rpc error: code = NotFound desc = could not find container \"720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7\": container with ID starting with 720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7 not found: ID does not exist" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.481776 4708 scope.go:117] "RemoveContainer" containerID="bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.482112 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e"} err="failed to get container status \"bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e\": rpc error: code = NotFound desc = could not find container \"bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e\": container with ID starting with bb5050d3a9240a266d2c9a99f351bc4d0f254616f76a019aefa44e53e95dda4e not found: ID does not exist" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.482193 4708 scope.go:117] "RemoveContainer" containerID="720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.482410 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.482510 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7"} err="failed to get container status \"720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7\": rpc error: code = NotFound desc = could not find container \"720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7\": container with ID starting with 720777650c574c96aacd1211c0dcc0ec01548f7752856bcc4928e51531d007f7 not found: ID does not exist" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.484561 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.484796 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.485223 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.498381 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.522256 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b557ce2-14db-4777-927b-045eccbac5e5-public-tls-certs\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.522369 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b557ce2-14db-4777-927b-045eccbac5e5-config-data\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.522462 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8n9nf\" (UniqueName: \"kubernetes.io/projected/1b557ce2-14db-4777-927b-045eccbac5e5-kube-api-access-8n9nf\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.522512 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b557ce2-14db-4777-927b-045eccbac5e5-logs\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.522542 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b557ce2-14db-4777-927b-045eccbac5e5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.522635 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b557ce2-14db-4777-927b-045eccbac5e5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc 
kubenswrapper[4708]: I0227 17:19:38.624819 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b557ce2-14db-4777-927b-045eccbac5e5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.624928 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b557ce2-14db-4777-927b-045eccbac5e5-public-tls-certs\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.624976 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b557ce2-14db-4777-927b-045eccbac5e5-config-data\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.625025 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8n9nf\" (UniqueName: \"kubernetes.io/projected/1b557ce2-14db-4777-927b-045eccbac5e5-kube-api-access-8n9nf\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.625047 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b557ce2-14db-4777-927b-045eccbac5e5-logs\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.625070 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b557ce2-14db-4777-927b-045eccbac5e5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.625754 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1b557ce2-14db-4777-927b-045eccbac5e5-logs\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.627884 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b557ce2-14db-4777-927b-045eccbac5e5-internal-tls-certs\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.629800 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1b557ce2-14db-4777-927b-045eccbac5e5-public-tls-certs\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.630273 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1b557ce2-14db-4777-927b-045eccbac5e5-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.631658 4708 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1b557ce2-14db-4777-927b-045eccbac5e5-config-data\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.641214 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8n9nf\" (UniqueName: \"kubernetes.io/projected/1b557ce2-14db-4777-927b-045eccbac5e5-kube-api-access-8n9nf\") pod \"nova-api-0\" (UID: \"1b557ce2-14db-4777-927b-045eccbac5e5\") " pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.851799 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.927393 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cdvx5"] Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.929636 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdvx5" Feb 27 17:19:38 crc kubenswrapper[4708]: I0227 17:19:38.972618 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdvx5"] Feb 27 17:19:39 crc kubenswrapper[4708]: E0227 17:19:39.018734 4708 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c is running failed: container process not found" containerID="acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 17:19:39 crc kubenswrapper[4708]: E0227 17:19:39.030288 4708 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c is running failed: container process not found" containerID="acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 17:19:39 crc kubenswrapper[4708]: E0227 17:19:39.036282 4708 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c is running failed: container process not found" containerID="acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Feb 27 17:19:39 crc kubenswrapper[4708]: E0227 17:19:39.036322 4708 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c is running failed: container process not found" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="aac9dfe8-f287-48b4-bebb-80f6d4ce57cc" containerName="nova-scheduler-scheduler" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.037256 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjhmn\" (UniqueName: \"kubernetes.io/projected/3856bd24-a61f-4c56-bfe9-5734964010fc-kube-api-access-sjhmn\") pod \"redhat-operators-cdvx5\" (UID: \"3856bd24-a61f-4c56-bfe9-5734964010fc\") " 
pod="openshift-marketplace/redhat-operators-cdvx5" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.037333 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3856bd24-a61f-4c56-bfe9-5734964010fc-utilities\") pod \"redhat-operators-cdvx5\" (UID: \"3856bd24-a61f-4c56-bfe9-5734964010fc\") " pod="openshift-marketplace/redhat-operators-cdvx5" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.037429 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3856bd24-a61f-4c56-bfe9-5734964010fc-catalog-content\") pod \"redhat-operators-cdvx5\" (UID: \"3856bd24-a61f-4c56-bfe9-5734964010fc\") " pod="openshift-marketplace/redhat-operators-cdvx5" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.138468 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3856bd24-a61f-4c56-bfe9-5734964010fc-catalog-content\") pod \"redhat-operators-cdvx5\" (UID: \"3856bd24-a61f-4c56-bfe9-5734964010fc\") " pod="openshift-marketplace/redhat-operators-cdvx5" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.138728 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjhmn\" (UniqueName: \"kubernetes.io/projected/3856bd24-a61f-4c56-bfe9-5734964010fc-kube-api-access-sjhmn\") pod \"redhat-operators-cdvx5\" (UID: \"3856bd24-a61f-4c56-bfe9-5734964010fc\") " pod="openshift-marketplace/redhat-operators-cdvx5" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.138836 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3856bd24-a61f-4c56-bfe9-5734964010fc-utilities\") pod \"redhat-operators-cdvx5\" (UID: \"3856bd24-a61f-4c56-bfe9-5734964010fc\") " pod="openshift-marketplace/redhat-operators-cdvx5" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.141355 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3856bd24-a61f-4c56-bfe9-5734964010fc-catalog-content\") pod \"redhat-operators-cdvx5\" (UID: \"3856bd24-a61f-4c56-bfe9-5734964010fc\") " pod="openshift-marketplace/redhat-operators-cdvx5" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.141869 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3856bd24-a61f-4c56-bfe9-5734964010fc-utilities\") pod \"redhat-operators-cdvx5\" (UID: \"3856bd24-a61f-4c56-bfe9-5734964010fc\") " pod="openshift-marketplace/redhat-operators-cdvx5" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.165488 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjhmn\" (UniqueName: \"kubernetes.io/projected/3856bd24-a61f-4c56-bfe9-5734964010fc-kube-api-access-sjhmn\") pod \"redhat-operators-cdvx5\" (UID: \"3856bd24-a61f-4c56-bfe9-5734964010fc\") " pod="openshift-marketplace/redhat-operators-cdvx5" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.173178 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.245921 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-combined-ca-bundle\") pod \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\" (UID: \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\") " Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.246017 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cpt94\" (UniqueName: \"kubernetes.io/projected/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-kube-api-access-cpt94\") pod \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\" (UID: \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\") " Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.246260 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-config-data\") pod \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\" (UID: \"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc\") " Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.253410 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-kube-api-access-cpt94" (OuterVolumeSpecName: "kube-api-access-cpt94") pod "aac9dfe8-f287-48b4-bebb-80f6d4ce57cc" (UID: "aac9dfe8-f287-48b4-bebb-80f6d4ce57cc"). InnerVolumeSpecName "kube-api-access-cpt94". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.284566 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-config-data" (OuterVolumeSpecName: "config-data") pod "aac9dfe8-f287-48b4-bebb-80f6d4ce57cc" (UID: "aac9dfe8-f287-48b4-bebb-80f6d4ce57cc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.300727 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "aac9dfe8-f287-48b4-bebb-80f6d4ce57cc" (UID: "aac9dfe8-f287-48b4-bebb-80f6d4ce57cc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.333568 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cdvx5" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.348009 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.348031 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cpt94\" (UniqueName: \"kubernetes.io/projected/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-kube-api-access-cpt94\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.348041 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.433117 4708 generic.go:334] "Generic (PLEG): container finished" podID="aac9dfe8-f287-48b4-bebb-80f6d4ce57cc" containerID="acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c" exitCode=0 Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.433170 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc","Type":"ContainerDied","Data":"acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c"} Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.433199 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"aac9dfe8-f287-48b4-bebb-80f6d4ce57cc","Type":"ContainerDied","Data":"4099b7a26a0ce03bc355ab61834775bd12c14debb0e5a1e961e57f84a797c47a"} Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.433218 4708 scope.go:117] "RemoveContainer" containerID="acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.433312 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.462461 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.501879 4708 scope.go:117] "RemoveContainer" containerID="acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.501969 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:19:39 crc kubenswrapper[4708]: E0227 17:19:39.504554 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c\": container with ID starting with acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c not found: ID does not exist" containerID="acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.504593 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c"} err="failed to get container status \"acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c\": rpc error: code = NotFound desc = could not find container \"acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c\": container with ID starting with acda37bc0f66d86a4c1d2800b55e2c3561c80a2387e941c2728022975f39916c not found: ID does not exist" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.536928 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.549004 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:19:39 crc kubenswrapper[4708]: E0227 17:19:39.549475 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aac9dfe8-f287-48b4-bebb-80f6d4ce57cc" containerName="nova-scheduler-scheduler" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.549490 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="aac9dfe8-f287-48b4-bebb-80f6d4ce57cc" containerName="nova-scheduler-scheduler" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.549693 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="aac9dfe8-f287-48b4-bebb-80f6d4ce57cc" containerName="nova-scheduler-scheduler" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.550440 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.553124 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.581673 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.665106 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5g885\" (UniqueName: \"kubernetes.io/projected/d1d2c8fb-e050-4235-b072-367cb5dd24d6-kube-api-access-5g885\") pod \"nova-scheduler-0\" (UID: \"d1d2c8fb-e050-4235-b072-367cb5dd24d6\") " pod="openstack/nova-scheduler-0" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.665265 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d2c8fb-e050-4235-b072-367cb5dd24d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1d2c8fb-e050-4235-b072-367cb5dd24d6\") " pod="openstack/nova-scheduler-0" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.665319 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d2c8fb-e050-4235-b072-367cb5dd24d6-config-data\") pod \"nova-scheduler-0\" (UID: \"d1d2c8fb-e050-4235-b072-367cb5dd24d6\") " pod="openstack/nova-scheduler-0" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.766577 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d2c8fb-e050-4235-b072-367cb5dd24d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1d2c8fb-e050-4235-b072-367cb5dd24d6\") " pod="openstack/nova-scheduler-0" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.766646 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d2c8fb-e050-4235-b072-367cb5dd24d6-config-data\") pod \"nova-scheduler-0\" (UID: \"d1d2c8fb-e050-4235-b072-367cb5dd24d6\") " pod="openstack/nova-scheduler-0" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.766742 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5g885\" (UniqueName: \"kubernetes.io/projected/d1d2c8fb-e050-4235-b072-367cb5dd24d6-kube-api-access-5g885\") pod \"nova-scheduler-0\" (UID: \"d1d2c8fb-e050-4235-b072-367cb5dd24d6\") " pod="openstack/nova-scheduler-0" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.770093 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1d2c8fb-e050-4235-b072-367cb5dd24d6-config-data\") pod \"nova-scheduler-0\" (UID: \"d1d2c8fb-e050-4235-b072-367cb5dd24d6\") " pod="openstack/nova-scheduler-0" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.775467 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d1d2c8fb-e050-4235-b072-367cb5dd24d6-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"d1d2c8fb-e050-4235-b072-367cb5dd24d6\") " pod="openstack/nova-scheduler-0" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.785829 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5g885\" (UniqueName: 
\"kubernetes.io/projected/d1d2c8fb-e050-4235-b072-367cb5dd24d6-kube-api-access-5g885\") pod \"nova-scheduler-0\" (UID: \"d1d2c8fb-e050-4235-b072-367cb5dd24d6\") " pod="openstack/nova-scheduler-0" Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.844650 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cdvx5"] Feb 27 17:19:39 crc kubenswrapper[4708]: I0227 17:19:39.883773 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Feb 27 17:19:40 crc kubenswrapper[4708]: I0227 17:19:40.238536 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4c7fb6c-80fc-404b-883c-10da2cea06d6" path="/var/lib/kubelet/pods/a4c7fb6c-80fc-404b-883c-10da2cea06d6/volumes" Feb 27 17:19:40 crc kubenswrapper[4708]: I0227 17:19:40.239443 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aac9dfe8-f287-48b4-bebb-80f6d4ce57cc" path="/var/lib/kubelet/pods/aac9dfe8-f287-48b4-bebb-80f6d4ce57cc/volumes" Feb 27 17:19:40 crc kubenswrapper[4708]: W0227 17:19:40.359166 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1d2c8fb_e050_4235_b072_367cb5dd24d6.slice/crio-d6e683e5903c59385c2ac9a91b858c0b64fb5ef4a255faaffc919bc3b1589192 WatchSource:0}: Error finding container d6e683e5903c59385c2ac9a91b858c0b64fb5ef4a255faaffc919bc3b1589192: Status 404 returned error can't find the container with id d6e683e5903c59385c2ac9a91b858c0b64fb5ef4a255faaffc919bc3b1589192 Feb 27 17:19:40 crc kubenswrapper[4708]: I0227 17:19:40.359294 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Feb 27 17:19:40 crc kubenswrapper[4708]: I0227 17:19:40.473032 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b557ce2-14db-4777-927b-045eccbac5e5","Type":"ContainerStarted","Data":"ec12fad22ff2a0361c927dd8576431178b2d9636a645618258b2fae413448843"} Feb 27 17:19:40 crc kubenswrapper[4708]: I0227 17:19:40.473285 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b557ce2-14db-4777-927b-045eccbac5e5","Type":"ContainerStarted","Data":"70d9a45f402cfe6231c267e041baae3ffb8e2ea3ba09c6741225e16a2c011f3c"} Feb 27 17:19:40 crc kubenswrapper[4708]: I0227 17:19:40.473296 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"1b557ce2-14db-4777-927b-045eccbac5e5","Type":"ContainerStarted","Data":"08286ce112e472a15affc4ade47c8cd6bc0ae033ab0a08ea20f421afdef8578c"} Feb 27 17:19:40 crc kubenswrapper[4708]: I0227 17:19:40.486124 4708 generic.go:334] "Generic (PLEG): container finished" podID="3856bd24-a61f-4c56-bfe9-5734964010fc" containerID="6c6605280f30844228330065fcc327d913e061cd9d9f0c149e28bed0ab820ec6" exitCode=0 Feb 27 17:19:40 crc kubenswrapper[4708]: I0227 17:19:40.486383 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdvx5" event={"ID":"3856bd24-a61f-4c56-bfe9-5734964010fc","Type":"ContainerDied","Data":"6c6605280f30844228330065fcc327d913e061cd9d9f0c149e28bed0ab820ec6"} Feb 27 17:19:40 crc kubenswrapper[4708]: I0227 17:19:40.486435 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdvx5" event={"ID":"3856bd24-a61f-4c56-bfe9-5734964010fc","Type":"ContainerStarted","Data":"ee85f79cadd6d17eac940346094aa8733f0ddd7542bfd30c7be911aec655720d"} Feb 27 17:19:40 crc 
kubenswrapper[4708]: I0227 17:19:40.500319 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d1d2c8fb-e050-4235-b072-367cb5dd24d6","Type":"ContainerStarted","Data":"d6e683e5903c59385c2ac9a91b858c0b64fb5ef4a255faaffc919bc3b1589192"} Feb 27 17:19:40 crc kubenswrapper[4708]: I0227 17:19:40.514960 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.514939472 podStartE2EDuration="2.514939472s" podCreationTimestamp="2026-02-27 17:19:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:19:40.503242244 +0000 UTC m=+1579.019039831" watchObservedRunningTime="2026-02-27 17:19:40.514939472 +0000 UTC m=+1579.030737059" Feb 27 17:19:40 crc kubenswrapper[4708]: I0227 17:19:40.822308 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": read tcp 10.217.0.2:48386->10.217.0.225:8775: read: connection reset by peer" Feb 27 17:19:40 crc kubenswrapper[4708]: I0227 17:19:40.822420 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.225:8775/\": read tcp 10.217.0.2:48370->10.217.0.225:8775: read: connection reset by peer" Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.364550 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.415408 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-config-data\") pod \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.415473 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-logs\") pod \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.415500 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-nova-metadata-tls-certs\") pod \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.415568 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2s7vm\" (UniqueName: \"kubernetes.io/projected/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-kube-api-access-2s7vm\") pod \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.415669 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-combined-ca-bundle\") pod \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\" (UID: \"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d\") " Feb 27 
17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.417003 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-logs" (OuterVolumeSpecName: "logs") pod "dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" (UID: "dc7cda80-863e-427e-83d1-ba8ba4ef8b3d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.429044 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-kube-api-access-2s7vm" (OuterVolumeSpecName: "kube-api-access-2s7vm") pod "dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" (UID: "dc7cda80-863e-427e-83d1-ba8ba4ef8b3d"). InnerVolumeSpecName "kube-api-access-2s7vm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.457389 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-config-data" (OuterVolumeSpecName: "config-data") pod "dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" (UID: "dc7cda80-863e-427e-83d1-ba8ba4ef8b3d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.471924 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" (UID: "dc7cda80-863e-427e-83d1-ba8ba4ef8b3d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.517572 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.517601 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.517610 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-logs\") on node \"crc\" DevicePath \"\""
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.517619 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2s7vm\" (UniqueName: \"kubernetes.io/projected/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-kube-api-access-2s7vm\") on node \"crc\" DevicePath \"\""
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.520109 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"d1d2c8fb-e050-4235-b072-367cb5dd24d6","Type":"ContainerStarted","Data":"178af03ca513d6b71b3a4192be8e685c8ddddab23393cf890dce4928747ecc5b"}
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.526081 4708 generic.go:334] "Generic (PLEG): container finished" podID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" containerID="3d544791e73d4f674be2f6abcaceb7d5c72d1bd4235e1a57c21efe226bdc7caf" exitCode=0
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.526896 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d","Type":"ContainerDied","Data":"3d544791e73d4f674be2f6abcaceb7d5c72d1bd4235e1a57c21efe226bdc7caf"}
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.526924 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"dc7cda80-863e-427e-83d1-ba8ba4ef8b3d","Type":"ContainerDied","Data":"7fd595bc043103fe00c3d0da07f7bd17ded8f3700eb7385b1ecb551158ee6ac7"}
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.526944 4708 scope.go:117] "RemoveContainer" containerID="3d544791e73d4f674be2f6abcaceb7d5c72d1bd4235e1a57c21efe226bdc7caf"
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.526946 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.552249 4708 scope.go:117] "RemoveContainer" containerID="fd9865d463e61e8b5e6ed048386215f3808fd9f161ddcdfdb9020fedec0f5c97"
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.554577 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.55455456 podStartE2EDuration="2.55455456s" podCreationTimestamp="2026-02-27 17:19:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:19:41.538328064 +0000 UTC m=+1580.054125651" watchObservedRunningTime="2026-02-27 17:19:41.55455456 +0000 UTC m=+1580.070352157"
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.559022 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" (UID: "dc7cda80-863e-427e-83d1-ba8ba4ef8b3d"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.582889 4708 scope.go:117] "RemoveContainer" containerID="3d544791e73d4f674be2f6abcaceb7d5c72d1bd4235e1a57c21efe226bdc7caf"
Feb 27 17:19:41 crc kubenswrapper[4708]: E0227 17:19:41.587013 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3d544791e73d4f674be2f6abcaceb7d5c72d1bd4235e1a57c21efe226bdc7caf\": container with ID starting with 3d544791e73d4f674be2f6abcaceb7d5c72d1bd4235e1a57c21efe226bdc7caf not found: ID does not exist" containerID="3d544791e73d4f674be2f6abcaceb7d5c72d1bd4235e1a57c21efe226bdc7caf"
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.587063 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d544791e73d4f674be2f6abcaceb7d5c72d1bd4235e1a57c21efe226bdc7caf"} err="failed to get container status \"3d544791e73d4f674be2f6abcaceb7d5c72d1bd4235e1a57c21efe226bdc7caf\": rpc error: code = NotFound desc = could not find container \"3d544791e73d4f674be2f6abcaceb7d5c72d1bd4235e1a57c21efe226bdc7caf\": container with ID starting with 3d544791e73d4f674be2f6abcaceb7d5c72d1bd4235e1a57c21efe226bdc7caf not found: ID does not exist"
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.587090 4708 scope.go:117] "RemoveContainer" containerID="fd9865d463e61e8b5e6ed048386215f3808fd9f161ddcdfdb9020fedec0f5c97"
Feb 27 17:19:41 crc kubenswrapper[4708]: E0227 17:19:41.588398 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd9865d463e61e8b5e6ed048386215f3808fd9f161ddcdfdb9020fedec0f5c97\": container with ID starting with fd9865d463e61e8b5e6ed048386215f3808fd9f161ddcdfdb9020fedec0f5c97 not found: ID does not exist" containerID="fd9865d463e61e8b5e6ed048386215f3808fd9f161ddcdfdb9020fedec0f5c97"
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.588428 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd9865d463e61e8b5e6ed048386215f3808fd9f161ddcdfdb9020fedec0f5c97"} err="failed to get container status \"fd9865d463e61e8b5e6ed048386215f3808fd9f161ddcdfdb9020fedec0f5c97\": rpc error: code = NotFound desc = could not find container \"fd9865d463e61e8b5e6ed048386215f3808fd9f161ddcdfdb9020fedec0f5c97\": container with ID starting with fd9865d463e61e8b5e6ed048386215f3808fd9f161ddcdfdb9020fedec0f5c97 not found: ID does not exist"
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.619611 4708 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.914032 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.924436 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.957221 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"]
Feb 27 17:19:41 crc kubenswrapper[4708]: E0227 17:19:41.957754 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" containerName="nova-metadata-log"
Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.957774 4708 state_mem.go:107]
"Deleted CPUSet assignment" podUID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" containerName="nova-metadata-log" Feb 27 17:19:41 crc kubenswrapper[4708]: E0227 17:19:41.957813 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" containerName="nova-metadata-metadata" Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.957821 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" containerName="nova-metadata-metadata" Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.958095 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" containerName="nova-metadata-metadata" Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.958124 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" containerName="nova-metadata-log" Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.959438 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.965202 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.965317 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Feb 27 17:19:41 crc kubenswrapper[4708]: I0227 17:19:41.983092 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.029932 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d441abe7-688c-4023-b44a-badbf0e2365b-logs\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.030026 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgr7t\" (UniqueName: \"kubernetes.io/projected/d441abe7-688c-4023-b44a-badbf0e2365b-kube-api-access-cgr7t\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.030436 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d441abe7-688c-4023-b44a-badbf0e2365b-config-data\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.030498 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d441abe7-688c-4023-b44a-badbf0e2365b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.030581 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d441abe7-688c-4023-b44a-badbf0e2365b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc 
kubenswrapper[4708]: I0227 17:19:42.131489 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d441abe7-688c-4023-b44a-badbf0e2365b-logs\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.131577 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgr7t\" (UniqueName: \"kubernetes.io/projected/d441abe7-688c-4023-b44a-badbf0e2365b-kube-api-access-cgr7t\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.131671 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d441abe7-688c-4023-b44a-badbf0e2365b-config-data\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.131703 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d441abe7-688c-4023-b44a-badbf0e2365b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.131747 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d441abe7-688c-4023-b44a-badbf0e2365b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.133691 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d441abe7-688c-4023-b44a-badbf0e2365b-logs\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.135738 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d441abe7-688c-4023-b44a-badbf0e2365b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.137537 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d441abe7-688c-4023-b44a-badbf0e2365b-config-data\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.138467 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/d441abe7-688c-4023-b44a-badbf0e2365b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.155353 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgr7t\" (UniqueName: \"kubernetes.io/projected/d441abe7-688c-4023-b44a-badbf0e2365b-kube-api-access-cgr7t\") pod \"nova-metadata-0\" (UID: \"d441abe7-688c-4023-b44a-badbf0e2365b\") " 
pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.249062 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc7cda80-863e-427e-83d1-ba8ba4ef8b3d" path="/var/lib/kubelet/pods/dc7cda80-863e-427e-83d1-ba8ba4ef8b3d/volumes" Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.282000 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Feb 27 17:19:42 crc kubenswrapper[4708]: W0227 17:19:42.798454 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd441abe7_688c_4023_b44a_badbf0e2365b.slice/crio-1c9d4480d4ddf7ea9ca64f534a61914d9ff2fb1f6c865a8cd10021fe521f513b WatchSource:0}: Error finding container 1c9d4480d4ddf7ea9ca64f534a61914d9ff2fb1f6c865a8cd10021fe521f513b: Status 404 returned error can't find the container with id 1c9d4480d4ddf7ea9ca64f534a61914d9ff2fb1f6c865a8cd10021fe521f513b Feb 27 17:19:42 crc kubenswrapper[4708]: I0227 17:19:42.799295 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Feb 27 17:19:43 crc kubenswrapper[4708]: I0227 17:19:43.563254 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d441abe7-688c-4023-b44a-badbf0e2365b","Type":"ContainerStarted","Data":"ab9c9f0c30f27ae74cad6909e4ba2551bfa1d693241641c90f172711ba3feac4"} Feb 27 17:19:43 crc kubenswrapper[4708]: I0227 17:19:43.563591 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d441abe7-688c-4023-b44a-badbf0e2365b","Type":"ContainerStarted","Data":"f8df8c4d67bbd21aef6605e67bd5be6d6d2942a02361328f2d64ef7448006a3c"} Feb 27 17:19:43 crc kubenswrapper[4708]: I0227 17:19:43.563616 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"d441abe7-688c-4023-b44a-badbf0e2365b","Type":"ContainerStarted","Data":"1c9d4480d4ddf7ea9ca64f534a61914d9ff2fb1f6c865a8cd10021fe521f513b"} Feb 27 17:19:43 crc kubenswrapper[4708]: I0227 17:19:43.569705 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdvx5" event={"ID":"3856bd24-a61f-4c56-bfe9-5734964010fc","Type":"ContainerStarted","Data":"90c8d5727b2e2ca449d509b8be27e98c30c5c64bc197a47d7ed594d699b5fdd2"} Feb 27 17:19:43 crc kubenswrapper[4708]: I0227 17:19:43.602163 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.6021380499999998 podStartE2EDuration="2.60213805s" podCreationTimestamp="2026-02-27 17:19:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:19:43.583179047 +0000 UTC m=+1582.098976744" watchObservedRunningTime="2026-02-27 17:19:43.60213805 +0000 UTC m=+1582.117935667" Feb 27 17:19:44 crc kubenswrapper[4708]: I0227 17:19:44.884504 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Feb 27 17:19:47 crc kubenswrapper[4708]: I0227 17:19:47.282664 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 17:19:47 crc kubenswrapper[4708]: I0227 17:19:47.283229 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Feb 27 17:19:48 crc kubenswrapper[4708]: I0227 17:19:48.642720 4708 generic.go:334] "Generic (PLEG): container finished" 
podID="3856bd24-a61f-4c56-bfe9-5734964010fc" containerID="90c8d5727b2e2ca449d509b8be27e98c30c5c64bc197a47d7ed594d699b5fdd2" exitCode=0 Feb 27 17:19:48 crc kubenswrapper[4708]: I0227 17:19:48.642820 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdvx5" event={"ID":"3856bd24-a61f-4c56-bfe9-5734964010fc","Type":"ContainerDied","Data":"90c8d5727b2e2ca449d509b8be27e98c30c5c64bc197a47d7ed594d699b5fdd2"} Feb 27 17:19:48 crc kubenswrapper[4708]: I0227 17:19:48.853767 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 17:19:48 crc kubenswrapper[4708]: I0227 17:19:48.854320 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Feb 27 17:19:49 crc kubenswrapper[4708]: I0227 17:19:49.660993 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdvx5" event={"ID":"3856bd24-a61f-4c56-bfe9-5734964010fc","Type":"ContainerStarted","Data":"9d03515aa0b81d2e00065ab32e7cdcf406578e600d56946b5383e68067edbd1e"} Feb 27 17:19:49 crc kubenswrapper[4708]: I0227 17:19:49.691482 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cdvx5" podStartSLOduration=3.1011200309999998 podStartE2EDuration="11.691464123s" podCreationTimestamp="2026-02-27 17:19:38 +0000 UTC" firstStartedPulling="2026-02-27 17:19:40.489990831 +0000 UTC m=+1579.005788418" lastFinishedPulling="2026-02-27 17:19:49.080334913 +0000 UTC m=+1587.596132510" observedRunningTime="2026-02-27 17:19:49.685582808 +0000 UTC m=+1588.201380405" watchObservedRunningTime="2026-02-27 17:19:49.691464123 +0000 UTC m=+1588.207261720" Feb 27 17:19:49 crc kubenswrapper[4708]: I0227 17:19:49.868019 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1b557ce2-14db-4777-927b-045eccbac5e5" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.234:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 17:19:49 crc kubenswrapper[4708]: I0227 17:19:49.868049 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="1b557ce2-14db-4777-927b-045eccbac5e5" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.234:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 17:19:49 crc kubenswrapper[4708]: I0227 17:19:49.884675 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Feb 27 17:19:49 crc kubenswrapper[4708]: I0227 17:19:49.942638 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Feb 27 17:19:50 crc kubenswrapper[4708]: I0227 17:19:50.723222 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Feb 27 17:19:52 crc kubenswrapper[4708]: I0227 17:19:52.282366 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 17:19:52 crc kubenswrapper[4708]: I0227 17:19:52.282457 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Feb 27 17:19:53 crc kubenswrapper[4708]: I0227 17:19:53.299019 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d441abe7-688c-4023-b44a-badbf0e2365b" 
containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.237:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 17:19:53 crc kubenswrapper[4708]: I0227 17:19:53.299046 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="d441abe7-688c-4023-b44a-badbf0e2365b" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.237:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 27 17:19:58 crc kubenswrapper[4708]: I0227 17:19:58.864106 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 17:19:58 crc kubenswrapper[4708]: I0227 17:19:58.864458 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Feb 27 17:19:58 crc kubenswrapper[4708]: I0227 17:19:58.864723 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 17:19:58 crc kubenswrapper[4708]: I0227 17:19:58.864769 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Feb 27 17:19:58 crc kubenswrapper[4708]: I0227 17:19:58.874810 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 17:19:58 crc kubenswrapper[4708]: I0227 17:19:58.875054 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Feb 27 17:19:59 crc kubenswrapper[4708]: I0227 17:19:59.334710 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cdvx5" Feb 27 17:19:59 crc kubenswrapper[4708]: I0227 17:19:59.334817 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cdvx5" Feb 27 17:20:00 crc kubenswrapper[4708]: I0227 17:20:00.156309 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536880-2qjgw"] Feb 27 17:20:00 crc kubenswrapper[4708]: I0227 17:20:00.159017 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536880-2qjgw" Feb 27 17:20:00 crc kubenswrapper[4708]: I0227 17:20:00.161561 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:20:00 crc kubenswrapper[4708]: I0227 17:20:00.161818 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:20:00 crc kubenswrapper[4708]: I0227 17:20:00.162528 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:20:00 crc kubenswrapper[4708]: I0227 17:20:00.164792 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536880-2qjgw"] Feb 27 17:20:00 crc kubenswrapper[4708]: I0227 17:20:00.252071 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjxcc\" (UniqueName: \"kubernetes.io/projected/5579d0a9-c19d-4b34-9636-40eab7128bc4-kube-api-access-vjxcc\") pod \"auto-csr-approver-29536880-2qjgw\" (UID: \"5579d0a9-c19d-4b34-9636-40eab7128bc4\") " pod="openshift-infra/auto-csr-approver-29536880-2qjgw" Feb 27 17:20:00 crc kubenswrapper[4708]: I0227 17:20:00.354382 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjxcc\" (UniqueName: \"kubernetes.io/projected/5579d0a9-c19d-4b34-9636-40eab7128bc4-kube-api-access-vjxcc\") pod \"auto-csr-approver-29536880-2qjgw\" (UID: \"5579d0a9-c19d-4b34-9636-40eab7128bc4\") " pod="openshift-infra/auto-csr-approver-29536880-2qjgw" Feb 27 17:20:00 crc kubenswrapper[4708]: I0227 17:20:00.376940 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjxcc\" (UniqueName: \"kubernetes.io/projected/5579d0a9-c19d-4b34-9636-40eab7128bc4-kube-api-access-vjxcc\") pod \"auto-csr-approver-29536880-2qjgw\" (UID: \"5579d0a9-c19d-4b34-9636-40eab7128bc4\") " pod="openshift-infra/auto-csr-approver-29536880-2qjgw" Feb 27 17:20:00 crc kubenswrapper[4708]: I0227 17:20:00.397708 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cdvx5" podUID="3856bd24-a61f-4c56-bfe9-5734964010fc" containerName="registry-server" probeResult="failure" output=< Feb 27 17:20:00 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 17:20:00 crc kubenswrapper[4708]: > Feb 27 17:20:00 crc kubenswrapper[4708]: I0227 17:20:00.476908 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536880-2qjgw" Feb 27 17:20:00 crc kubenswrapper[4708]: I0227 17:20:00.520664 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Feb 27 17:20:01 crc kubenswrapper[4708]: I0227 17:20:01.060674 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536880-2qjgw"] Feb 27 17:20:01 crc kubenswrapper[4708]: W0227 17:20:01.060676 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5579d0a9_c19d_4b34_9636_40eab7128bc4.slice/crio-4f0be72dd723831f0ece9dd43683a66abd46abb135a08730b931a18493cea9ff WatchSource:0}: Error finding container 4f0be72dd723831f0ece9dd43683a66abd46abb135a08730b931a18493cea9ff: Status 404 returned error can't find the container with id 4f0be72dd723831f0ece9dd43683a66abd46abb135a08730b931a18493cea9ff Feb 27 17:20:01 crc kubenswrapper[4708]: I0227 17:20:01.808977 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536880-2qjgw" event={"ID":"5579d0a9-c19d-4b34-9636-40eab7128bc4","Type":"ContainerStarted","Data":"4f0be72dd723831f0ece9dd43683a66abd46abb135a08730b931a18493cea9ff"} Feb 27 17:20:02 crc kubenswrapper[4708]: I0227 17:20:02.289880 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 17:20:02 crc kubenswrapper[4708]: I0227 17:20:02.292716 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Feb 27 17:20:02 crc kubenswrapper[4708]: I0227 17:20:02.298029 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 17:20:02 crc kubenswrapper[4708]: I0227 17:20:02.856153 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Feb 27 17:20:03 crc kubenswrapper[4708]: I0227 17:20:03.842991 4708 generic.go:334] "Generic (PLEG): container finished" podID="5579d0a9-c19d-4b34-9636-40eab7128bc4" containerID="8a678cc75fb3233a08743bbb4def5bb1881eb46b274706e2010bbb929a737f30" exitCode=0 Feb 27 17:20:03 crc kubenswrapper[4708]: I0227 17:20:03.843087 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536880-2qjgw" event={"ID":"5579d0a9-c19d-4b34-9636-40eab7128bc4","Type":"ContainerDied","Data":"8a678cc75fb3233a08743bbb4def5bb1881eb46b274706e2010bbb929a737f30"} Feb 27 17:20:04 crc kubenswrapper[4708]: I0227 17:20:04.295311 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 17:20:04 crc kubenswrapper[4708]: I0227 17:20:04.295547 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="83739fce-8870-491c-844b-9674e73b937a" containerName="kube-state-metrics" containerID="cri-o://464ab374952e9ea798847dd85f9ad750f5e3919a70afea2e2dfeee4d20ae9791" gracePeriod=30 Feb 27 17:20:04 crc kubenswrapper[4708]: I0227 17:20:04.853625 4708 generic.go:334] "Generic (PLEG): container finished" podID="83739fce-8870-491c-844b-9674e73b937a" containerID="464ab374952e9ea798847dd85f9ad750f5e3919a70afea2e2dfeee4d20ae9791" exitCode=2 Feb 27 17:20:04 crc kubenswrapper[4708]: I0227 17:20:04.853712 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"83739fce-8870-491c-844b-9674e73b937a","Type":"ContainerDied","Data":"464ab374952e9ea798847dd85f9ad750f5e3919a70afea2e2dfeee4d20ae9791"} Feb 27 17:20:04 crc kubenswrapper[4708]: I0227 17:20:04.853957 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"83739fce-8870-491c-844b-9674e73b937a","Type":"ContainerDied","Data":"7da37157e1c99d3ee20e321ae7211883f83203e17c4d2ac27961138f3b388681"} Feb 27 17:20:04 crc kubenswrapper[4708]: I0227 17:20:04.853971 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7da37157e1c99d3ee20e321ae7211883f83203e17c4d2ac27961138f3b388681" Feb 27 17:20:04 crc kubenswrapper[4708]: I0227 17:20:04.878285 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.052434 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dczss\" (UniqueName: \"kubernetes.io/projected/83739fce-8870-491c-844b-9674e73b937a-kube-api-access-dczss\") pod \"83739fce-8870-491c-844b-9674e73b937a\" (UID: \"83739fce-8870-491c-844b-9674e73b937a\") " Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.059749 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83739fce-8870-491c-844b-9674e73b937a-kube-api-access-dczss" (OuterVolumeSpecName: "kube-api-access-dczss") pod "83739fce-8870-491c-844b-9674e73b937a" (UID: "83739fce-8870-491c-844b-9674e73b937a"). InnerVolumeSpecName "kube-api-access-dczss". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.155405 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dczss\" (UniqueName: \"kubernetes.io/projected/83739fce-8870-491c-844b-9674e73b937a-kube-api-access-dczss\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.323694 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536880-2qjgw" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.462215 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjxcc\" (UniqueName: \"kubernetes.io/projected/5579d0a9-c19d-4b34-9636-40eab7128bc4-kube-api-access-vjxcc\") pod \"5579d0a9-c19d-4b34-9636-40eab7128bc4\" (UID: \"5579d0a9-c19d-4b34-9636-40eab7128bc4\") " Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.473058 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5579d0a9-c19d-4b34-9636-40eab7128bc4-kube-api-access-vjxcc" (OuterVolumeSpecName: "kube-api-access-vjxcc") pod "5579d0a9-c19d-4b34-9636-40eab7128bc4" (UID: "5579d0a9-c19d-4b34-9636-40eab7128bc4"). InnerVolumeSpecName "kube-api-access-vjxcc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.565499 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjxcc\" (UniqueName: \"kubernetes.io/projected/5579d0a9-c19d-4b34-9636-40eab7128bc4-kube-api-access-vjxcc\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.631978 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.632035 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.632076 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.632910 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.632971 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" gracePeriod=600 Feb 27 17:20:05 crc kubenswrapper[4708]: E0227 17:20:05.751675 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.868346 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536880-2qjgw" event={"ID":"5579d0a9-c19d-4b34-9636-40eab7128bc4","Type":"ContainerDied","Data":"4f0be72dd723831f0ece9dd43683a66abd46abb135a08730b931a18493cea9ff"} Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.868385 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f0be72dd723831f0ece9dd43683a66abd46abb135a08730b931a18493cea9ff" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.868440 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536880-2qjgw" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.871639 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" exitCode=0 Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.871711 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.871708 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e"} Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.871766 4708 scope.go:117] "RemoveContainer" containerID="c1a4a3b793414b4b10c54d77ec77375b6657e6d822660a8ebe494db8ea78162c" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.872580 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:20:05 crc kubenswrapper[4708]: E0227 17:20:05.872957 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.945539 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 17:20:05 crc kubenswrapper[4708]: I0227 17:20:05.986970 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.035906 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 17:20:06 crc kubenswrapper[4708]: E0227 17:20:06.036394 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83739fce-8870-491c-844b-9674e73b937a" containerName="kube-state-metrics" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.036410 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="83739fce-8870-491c-844b-9674e73b937a" containerName="kube-state-metrics" Feb 27 17:20:06 crc kubenswrapper[4708]: E0227 17:20:06.036418 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5579d0a9-c19d-4b34-9636-40eab7128bc4" containerName="oc" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.036425 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5579d0a9-c19d-4b34-9636-40eab7128bc4" containerName="oc" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.036623 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="5579d0a9-c19d-4b34-9636-40eab7128bc4" containerName="oc" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.036644 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="83739fce-8870-491c-844b-9674e73b937a" containerName="kube-state-metrics" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.037386 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.041188 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.041391 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.074907 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.087644 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e8569f7c-7242-437f-80b5-0146d75c19c5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e8569f7c-7242-437f-80b5-0146d75c19c5\") " pod="openstack/kube-state-metrics-0" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.087933 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8569f7c-7242-437f-80b5-0146d75c19c5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e8569f7c-7242-437f-80b5-0146d75c19c5\") " pod="openstack/kube-state-metrics-0" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.087997 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8569f7c-7242-437f-80b5-0146d75c19c5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e8569f7c-7242-437f-80b5-0146d75c19c5\") " pod="openstack/kube-state-metrics-0" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.088058 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-924v5\" (UniqueName: \"kubernetes.io/projected/e8569f7c-7242-437f-80b5-0146d75c19c5-kube-api-access-924v5\") pod \"kube-state-metrics-0\" (UID: \"e8569f7c-7242-437f-80b5-0146d75c19c5\") " pod="openstack/kube-state-metrics-0" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.190019 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e8569f7c-7242-437f-80b5-0146d75c19c5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e8569f7c-7242-437f-80b5-0146d75c19c5\") " pod="openstack/kube-state-metrics-0" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.190192 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8569f7c-7242-437f-80b5-0146d75c19c5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e8569f7c-7242-437f-80b5-0146d75c19c5\") " pod="openstack/kube-state-metrics-0" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.190235 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8569f7c-7242-437f-80b5-0146d75c19c5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e8569f7c-7242-437f-80b5-0146d75c19c5\") " pod="openstack/kube-state-metrics-0" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.190284 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-924v5\" 
(UniqueName: \"kubernetes.io/projected/e8569f7c-7242-437f-80b5-0146d75c19c5-kube-api-access-924v5\") pod \"kube-state-metrics-0\" (UID: \"e8569f7c-7242-437f-80b5-0146d75c19c5\") " pod="openstack/kube-state-metrics-0" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.195039 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8569f7c-7242-437f-80b5-0146d75c19c5-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"e8569f7c-7242-437f-80b5-0146d75c19c5\") " pod="openstack/kube-state-metrics-0" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.195211 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/e8569f7c-7242-437f-80b5-0146d75c19c5-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"e8569f7c-7242-437f-80b5-0146d75c19c5\") " pod="openstack/kube-state-metrics-0" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.206516 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/e8569f7c-7242-437f-80b5-0146d75c19c5-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"e8569f7c-7242-437f-80b5-0146d75c19c5\") " pod="openstack/kube-state-metrics-0" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.206950 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-924v5\" (UniqueName: \"kubernetes.io/projected/e8569f7c-7242-437f-80b5-0146d75c19c5-kube-api-access-924v5\") pod \"kube-state-metrics-0\" (UID: \"e8569f7c-7242-437f-80b5-0146d75c19c5\") " pod="openstack/kube-state-metrics-0" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.254550 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83739fce-8870-491c-844b-9674e73b937a" path="/var/lib/kubelet/pods/83739fce-8870-491c-844b-9674e73b937a/volumes" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.321442 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.321709 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="ceilometer-central-agent" containerID="cri-o://6db89fd7589a05cbcea565348b7ad9e44284c9251b3084c6941ab1394cbb1b20" gracePeriod=30 Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.321817 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="ceilometer-notification-agent" containerID="cri-o://b503c15b21177bad93687499491de19332e9d9a81b0419f0e03d5839ab74f283" gracePeriod=30 Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.321827 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="sg-core" containerID="cri-o://9d4ea8772f63a02b21dc4d35a6c3ed99c8e0c6e6ae1b46830d2cfb3308f08898" gracePeriod=30 Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.321783 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="proxy-httpd" containerID="cri-o://b7f244a7def2d7bb358fdca74dcd9b7c6f2a015328263e2625e18266836f0973" gracePeriod=30 Feb 27 
17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.364891 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.392298 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536874-bdrv8"] Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.406808 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536874-bdrv8"] Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.824339 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Feb 27 17:20:06 crc kubenswrapper[4708]: W0227 17:20:06.840083 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode8569f7c_7242_437f_80b5_0146d75c19c5.slice/crio-b6245b90bec2ed001f915ba5b659973c2b593e2d09dd2c94ce6f5b068a3ea991 WatchSource:0}: Error finding container b6245b90bec2ed001f915ba5b659973c2b593e2d09dd2c94ce6f5b068a3ea991: Status 404 returned error can't find the container with id b6245b90bec2ed001f915ba5b659973c2b593e2d09dd2c94ce6f5b068a3ea991 Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.886319 4708 generic.go:334] "Generic (PLEG): container finished" podID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerID="b7f244a7def2d7bb358fdca74dcd9b7c6f2a015328263e2625e18266836f0973" exitCode=0 Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.886365 4708 generic.go:334] "Generic (PLEG): container finished" podID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerID="9d4ea8772f63a02b21dc4d35a6c3ed99c8e0c6e6ae1b46830d2cfb3308f08898" exitCode=2 Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.886374 4708 generic.go:334] "Generic (PLEG): container finished" podID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerID="6db89fd7589a05cbcea565348b7ad9e44284c9251b3084c6941ab1394cbb1b20" exitCode=0 Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.886400 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d025830-8db8-4719-8ea5-66f9a27d1d42","Type":"ContainerDied","Data":"b7f244a7def2d7bb358fdca74dcd9b7c6f2a015328263e2625e18266836f0973"} Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.886459 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d025830-8db8-4719-8ea5-66f9a27d1d42","Type":"ContainerDied","Data":"9d4ea8772f63a02b21dc4d35a6c3ed99c8e0c6e6ae1b46830d2cfb3308f08898"} Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.886471 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d025830-8db8-4719-8ea5-66f9a27d1d42","Type":"ContainerDied","Data":"6db89fd7589a05cbcea565348b7ad9e44284c9251b3084c6941ab1394cbb1b20"} Feb 27 17:20:06 crc kubenswrapper[4708]: I0227 17:20:06.887713 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e8569f7c-7242-437f-80b5-0146d75c19c5","Type":"ContainerStarted","Data":"b6245b90bec2ed001f915ba5b659973c2b593e2d09dd2c94ce6f5b068a3ea991"} Feb 27 17:20:07 crc kubenswrapper[4708]: I0227 17:20:07.903484 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"e8569f7c-7242-437f-80b5-0146d75c19c5","Type":"ContainerStarted","Data":"d734e0c0c8fe8fddde332041613e6a83a2011e3b1ae7bbb8a4d4d5db12c1b31a"} Feb 27 17:20:07 crc kubenswrapper[4708]: I0227 17:20:07.903612 4708 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Feb 27 17:20:07 crc kubenswrapper[4708]: I0227 17:20:07.931448 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.5675123859999998 podStartE2EDuration="2.931423831s" podCreationTimestamp="2026-02-27 17:20:05 +0000 UTC" firstStartedPulling="2026-02-27 17:20:06.842024833 +0000 UTC m=+1605.357822420" lastFinishedPulling="2026-02-27 17:20:07.205936258 +0000 UTC m=+1605.721733865" observedRunningTime="2026-02-27 17:20:07.925055922 +0000 UTC m=+1606.440853539" watchObservedRunningTime="2026-02-27 17:20:07.931423831 +0000 UTC m=+1606.447221448" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.244334 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87947d39-2a62-41d6-836f-b385d2b3ae28" path="/var/lib/kubelet/pods/87947d39-2a62-41d6-836f-b385d2b3ae28/volumes" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.786517 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.856605 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-combined-ca-bundle\") pod \"2d025830-8db8-4719-8ea5-66f9a27d1d42\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.856647 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-config-data\") pod \"2d025830-8db8-4719-8ea5-66f9a27d1d42\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.856757 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-scripts\") pod \"2d025830-8db8-4719-8ea5-66f9a27d1d42\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.856858 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-sg-core-conf-yaml\") pod \"2d025830-8db8-4719-8ea5-66f9a27d1d42\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.856895 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d025830-8db8-4719-8ea5-66f9a27d1d42-run-httpd\") pod \"2d025830-8db8-4719-8ea5-66f9a27d1d42\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.856937 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7nj8d\" (UniqueName: \"kubernetes.io/projected/2d025830-8db8-4719-8ea5-66f9a27d1d42-kube-api-access-7nj8d\") pod \"2d025830-8db8-4719-8ea5-66f9a27d1d42\" (UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.857000 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d025830-8db8-4719-8ea5-66f9a27d1d42-log-httpd\") pod \"2d025830-8db8-4719-8ea5-66f9a27d1d42\" 
(UID: \"2d025830-8db8-4719-8ea5-66f9a27d1d42\") " Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.857188 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d025830-8db8-4719-8ea5-66f9a27d1d42-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2d025830-8db8-4719-8ea5-66f9a27d1d42" (UID: "2d025830-8db8-4719-8ea5-66f9a27d1d42"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.857503 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d025830-8db8-4719-8ea5-66f9a27d1d42-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2d025830-8db8-4719-8ea5-66f9a27d1d42" (UID: "2d025830-8db8-4719-8ea5-66f9a27d1d42"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.857577 4708 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d025830-8db8-4719-8ea5-66f9a27d1d42-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.865023 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-scripts" (OuterVolumeSpecName: "scripts") pod "2d025830-8db8-4719-8ea5-66f9a27d1d42" (UID: "2d025830-8db8-4719-8ea5-66f9a27d1d42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.874571 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d025830-8db8-4719-8ea5-66f9a27d1d42-kube-api-access-7nj8d" (OuterVolumeSpecName: "kube-api-access-7nj8d") pod "2d025830-8db8-4719-8ea5-66f9a27d1d42" (UID: "2d025830-8db8-4719-8ea5-66f9a27d1d42"). InnerVolumeSpecName "kube-api-access-7nj8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.924315 4708 generic.go:334] "Generic (PLEG): container finished" podID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerID="b503c15b21177bad93687499491de19332e9d9a81b0419f0e03d5839ab74f283" exitCode=0 Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.925516 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.925941 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d025830-8db8-4719-8ea5-66f9a27d1d42","Type":"ContainerDied","Data":"b503c15b21177bad93687499491de19332e9d9a81b0419f0e03d5839ab74f283"} Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.926008 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2d025830-8db8-4719-8ea5-66f9a27d1d42","Type":"ContainerDied","Data":"60bfc684bcba65ce0271883a9a1f0879f75122151b819d27cfd6f94ec3e873cf"} Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.926028 4708 scope.go:117] "RemoveContainer" containerID="b7f244a7def2d7bb358fdca74dcd9b7c6f2a015328263e2625e18266836f0973" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.939177 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2d025830-8db8-4719-8ea5-66f9a27d1d42" (UID: "2d025830-8db8-4719-8ea5-66f9a27d1d42"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.958427 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2d025830-8db8-4719-8ea5-66f9a27d1d42" (UID: "2d025830-8db8-4719-8ea5-66f9a27d1d42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.960399 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.960448 4708 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.960463 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7nj8d\" (UniqueName: \"kubernetes.io/projected/2d025830-8db8-4719-8ea5-66f9a27d1d42-kube-api-access-7nj8d\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.960476 4708 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2d025830-8db8-4719-8ea5-66f9a27d1d42-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:08 crc kubenswrapper[4708]: I0227 17:20:08.960489 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.009994 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-config-data" (OuterVolumeSpecName: "config-data") pod "2d025830-8db8-4719-8ea5-66f9a27d1d42" (UID: "2d025830-8db8-4719-8ea5-66f9a27d1d42"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.021691 4708 scope.go:117] "RemoveContainer" containerID="9d4ea8772f63a02b21dc4d35a6c3ed99c8e0c6e6ae1b46830d2cfb3308f08898" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.048998 4708 scope.go:117] "RemoveContainer" containerID="b503c15b21177bad93687499491de19332e9d9a81b0419f0e03d5839ab74f283" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.064815 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2d025830-8db8-4719-8ea5-66f9a27d1d42-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.072434 4708 scope.go:117] "RemoveContainer" containerID="6db89fd7589a05cbcea565348b7ad9e44284c9251b3084c6941ab1394cbb1b20" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.092438 4708 scope.go:117] "RemoveContainer" containerID="b7f244a7def2d7bb358fdca74dcd9b7c6f2a015328263e2625e18266836f0973" Feb 27 17:20:09 crc kubenswrapper[4708]: E0227 17:20:09.092890 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b7f244a7def2d7bb358fdca74dcd9b7c6f2a015328263e2625e18266836f0973\": container with ID starting with b7f244a7def2d7bb358fdca74dcd9b7c6f2a015328263e2625e18266836f0973 not found: ID does not exist" containerID="b7f244a7def2d7bb358fdca74dcd9b7c6f2a015328263e2625e18266836f0973" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.092918 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b7f244a7def2d7bb358fdca74dcd9b7c6f2a015328263e2625e18266836f0973"} err="failed to get container status \"b7f244a7def2d7bb358fdca74dcd9b7c6f2a015328263e2625e18266836f0973\": rpc error: code = NotFound desc = could not find container \"b7f244a7def2d7bb358fdca74dcd9b7c6f2a015328263e2625e18266836f0973\": container with ID starting with b7f244a7def2d7bb358fdca74dcd9b7c6f2a015328263e2625e18266836f0973 not found: ID does not exist" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.092938 4708 scope.go:117] "RemoveContainer" containerID="9d4ea8772f63a02b21dc4d35a6c3ed99c8e0c6e6ae1b46830d2cfb3308f08898" Feb 27 17:20:09 crc kubenswrapper[4708]: E0227 17:20:09.093273 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d4ea8772f63a02b21dc4d35a6c3ed99c8e0c6e6ae1b46830d2cfb3308f08898\": container with ID starting with 9d4ea8772f63a02b21dc4d35a6c3ed99c8e0c6e6ae1b46830d2cfb3308f08898 not found: ID does not exist" containerID="9d4ea8772f63a02b21dc4d35a6c3ed99c8e0c6e6ae1b46830d2cfb3308f08898" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.093312 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d4ea8772f63a02b21dc4d35a6c3ed99c8e0c6e6ae1b46830d2cfb3308f08898"} err="failed to get container status \"9d4ea8772f63a02b21dc4d35a6c3ed99c8e0c6e6ae1b46830d2cfb3308f08898\": rpc error: code = NotFound desc = could not find container \"9d4ea8772f63a02b21dc4d35a6c3ed99c8e0c6e6ae1b46830d2cfb3308f08898\": container with ID starting with 9d4ea8772f63a02b21dc4d35a6c3ed99c8e0c6e6ae1b46830d2cfb3308f08898 not found: ID does not exist" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.093350 4708 scope.go:117] "RemoveContainer" containerID="b503c15b21177bad93687499491de19332e9d9a81b0419f0e03d5839ab74f283" Feb 27 17:20:09 crc kubenswrapper[4708]: E0227 
17:20:09.093637 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b503c15b21177bad93687499491de19332e9d9a81b0419f0e03d5839ab74f283\": container with ID starting with b503c15b21177bad93687499491de19332e9d9a81b0419f0e03d5839ab74f283 not found: ID does not exist" containerID="b503c15b21177bad93687499491de19332e9d9a81b0419f0e03d5839ab74f283" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.093661 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b503c15b21177bad93687499491de19332e9d9a81b0419f0e03d5839ab74f283"} err="failed to get container status \"b503c15b21177bad93687499491de19332e9d9a81b0419f0e03d5839ab74f283\": rpc error: code = NotFound desc = could not find container \"b503c15b21177bad93687499491de19332e9d9a81b0419f0e03d5839ab74f283\": container with ID starting with b503c15b21177bad93687499491de19332e9d9a81b0419f0e03d5839ab74f283 not found: ID does not exist" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.093677 4708 scope.go:117] "RemoveContainer" containerID="6db89fd7589a05cbcea565348b7ad9e44284c9251b3084c6941ab1394cbb1b20" Feb 27 17:20:09 crc kubenswrapper[4708]: E0227 17:20:09.094078 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6db89fd7589a05cbcea565348b7ad9e44284c9251b3084c6941ab1394cbb1b20\": container with ID starting with 6db89fd7589a05cbcea565348b7ad9e44284c9251b3084c6941ab1394cbb1b20 not found: ID does not exist" containerID="6db89fd7589a05cbcea565348b7ad9e44284c9251b3084c6941ab1394cbb1b20" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.094123 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6db89fd7589a05cbcea565348b7ad9e44284c9251b3084c6941ab1394cbb1b20"} err="failed to get container status \"6db89fd7589a05cbcea565348b7ad9e44284c9251b3084c6941ab1394cbb1b20\": rpc error: code = NotFound desc = could not find container \"6db89fd7589a05cbcea565348b7ad9e44284c9251b3084c6941ab1394cbb1b20\": container with ID starting with 6db89fd7589a05cbcea565348b7ad9e44284c9251b3084c6941ab1394cbb1b20 not found: ID does not exist" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.291160 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.307731 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.316658 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:20:09 crc kubenswrapper[4708]: E0227 17:20:09.317058 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="ceilometer-notification-agent" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.317074 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="ceilometer-notification-agent" Feb 27 17:20:09 crc kubenswrapper[4708]: E0227 17:20:09.317096 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="sg-core" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.317103 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="sg-core" Feb 27 17:20:09 crc kubenswrapper[4708]: E0227 
17:20:09.317127 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="ceilometer-central-agent" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.317135 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="ceilometer-central-agent" Feb 27 17:20:09 crc kubenswrapper[4708]: E0227 17:20:09.317147 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="proxy-httpd" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.317153 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="proxy-httpd" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.317405 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="proxy-httpd" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.317432 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="ceilometer-central-agent" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.317444 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="sg-core" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.317457 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" containerName="ceilometer-notification-agent" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.322970 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.329137 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.329379 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.329568 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.350003 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.374682 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.374718 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.374751 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94b16cd7-7b50-4227-9477-98fff88f91f0-log-httpd\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc 
kubenswrapper[4708]: I0227 17:20:09.374771 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp8rf\" (UniqueName: \"kubernetes.io/projected/94b16cd7-7b50-4227-9477-98fff88f91f0-kube-api-access-pp8rf\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.374787 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.374893 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-scripts\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.375007 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94b16cd7-7b50-4227-9477-98fff88f91f0-run-httpd\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.375119 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-config-data\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.477303 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.477377 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94b16cd7-7b50-4227-9477-98fff88f91f0-log-httpd\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.477445 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp8rf\" (UniqueName: \"kubernetes.io/projected/94b16cd7-7b50-4227-9477-98fff88f91f0-kube-api-access-pp8rf\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.477475 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.477498 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-scripts\") pod \"ceilometer-0\" 
(UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.477584 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94b16cd7-7b50-4227-9477-98fff88f91f0-run-httpd\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.477669 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-config-data\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.477771 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.477807 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94b16cd7-7b50-4227-9477-98fff88f91f0-log-httpd\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.478042 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94b16cd7-7b50-4227-9477-98fff88f91f0-run-httpd\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.482120 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.483315 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-scripts\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.484191 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.485414 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.485498 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-config-data\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 
17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.493153 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp8rf\" (UniqueName: \"kubernetes.io/projected/94b16cd7-7b50-4227-9477-98fff88f91f0-kube-api-access-pp8rf\") pod \"ceilometer-0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " pod="openstack/ceilometer-0" Feb 27 17:20:09 crc kubenswrapper[4708]: I0227 17:20:09.662795 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:20:10 crc kubenswrapper[4708]: I0227 17:20:10.179470 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:20:10 crc kubenswrapper[4708]: I0227 17:20:10.245888 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d025830-8db8-4719-8ea5-66f9a27d1d42" path="/var/lib/kubelet/pods/2d025830-8db8-4719-8ea5-66f9a27d1d42/volumes" Feb 27 17:20:10 crc kubenswrapper[4708]: I0227 17:20:10.395182 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cdvx5" podUID="3856bd24-a61f-4c56-bfe9-5734964010fc" containerName="registry-server" probeResult="failure" output=< Feb 27 17:20:10 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 17:20:10 crc kubenswrapper[4708]: > Feb 27 17:20:10 crc kubenswrapper[4708]: I0227 17:20:10.951578 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94b16cd7-7b50-4227-9477-98fff88f91f0","Type":"ContainerStarted","Data":"3c53a248c9576e1eb37f13b06006f1fcabaa878836e8f86f423837ae1eb7e92b"} Feb 27 17:20:10 crc kubenswrapper[4708]: I0227 17:20:10.951948 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94b16cd7-7b50-4227-9477-98fff88f91f0","Type":"ContainerStarted","Data":"a74361736af7a641771278086f273d92dfa4024d2eef94631332791446804199"} Feb 27 17:20:11 crc kubenswrapper[4708]: I0227 17:20:11.975276 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94b16cd7-7b50-4227-9477-98fff88f91f0","Type":"ContainerStarted","Data":"e1c74c6dfc8a8caa3d496b6ac28842e3487ba18895902adb0caa2f314b8b1e98"} Feb 27 17:20:12 crc kubenswrapper[4708]: I0227 17:20:12.783169 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-lhfzc"] Feb 27 17:20:12 crc kubenswrapper[4708]: I0227 17:20:12.800603 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-lhfzc"] Feb 27 17:20:12 crc kubenswrapper[4708]: I0227 17:20:12.889907 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-db-sync-nrwjt"] Feb 27 17:20:12 crc kubenswrapper[4708]: I0227 17:20:12.891285 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:12 crc kubenswrapper[4708]: I0227 17:20:12.893662 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 17:20:12 crc kubenswrapper[4708]: I0227 17:20:12.916740 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-nrwjt"] Feb 27 17:20:12 crc kubenswrapper[4708]: I0227 17:20:12.964906 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-combined-ca-bundle\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:12 crc kubenswrapper[4708]: I0227 17:20:12.964959 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-config-data\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:12 crc kubenswrapper[4708]: I0227 17:20:12.964997 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-certs\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:12 crc kubenswrapper[4708]: I0227 17:20:12.965018 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-scripts\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:12 crc kubenswrapper[4708]: I0227 17:20:12.965064 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j2cb\" (UniqueName: \"kubernetes.io/projected/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-kube-api-access-9j2cb\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:13 crc kubenswrapper[4708]: I0227 17:20:13.010186 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94b16cd7-7b50-4227-9477-98fff88f91f0","Type":"ContainerStarted","Data":"d828cadcaa795ea50ed65e65565c4b7585bc64c1d78927a2778145e023312e9a"} Feb 27 17:20:13 crc kubenswrapper[4708]: I0227 17:20:13.067678 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j2cb\" (UniqueName: \"kubernetes.io/projected/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-kube-api-access-9j2cb\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:13 crc kubenswrapper[4708]: I0227 17:20:13.068438 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-combined-ca-bundle\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:13 crc kubenswrapper[4708]: I0227 17:20:13.068498 4708 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-config-data\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:13 crc kubenswrapper[4708]: I0227 17:20:13.068560 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-certs\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:13 crc kubenswrapper[4708]: I0227 17:20:13.068590 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-scripts\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:13 crc kubenswrapper[4708]: I0227 17:20:13.073614 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-certs\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:13 crc kubenswrapper[4708]: I0227 17:20:13.075796 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-config-data\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:13 crc kubenswrapper[4708]: I0227 17:20:13.076152 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-scripts\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:13 crc kubenswrapper[4708]: I0227 17:20:13.076354 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-combined-ca-bundle\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:13 crc kubenswrapper[4708]: I0227 17:20:13.086128 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j2cb\" (UniqueName: \"kubernetes.io/projected/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-kube-api-access-9j2cb\") pod \"cloudkitty-db-sync-nrwjt\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:13 crc kubenswrapper[4708]: I0227 17:20:13.212911 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:13 crc kubenswrapper[4708]: I0227 17:20:13.699463 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-db-sync-nrwjt"] Feb 27 17:20:14 crc kubenswrapper[4708]: I0227 17:20:14.021360 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-nrwjt" event={"ID":"c553d876-99a3-4aed-b8ce-5b7ea04f17d5","Type":"ContainerStarted","Data":"1db88b7c7bd7a53a049697844590a5392894e5537ee5cfb4259650b9459772c4"} Feb 27 17:20:14 crc kubenswrapper[4708]: I0227 17:20:14.240233 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76e1fee2-5549-44d4-aaab-c70ad0fb083e" path="/var/lib/kubelet/pods/76e1fee2-5549-44d4-aaab-c70ad0fb083e/volumes" Feb 27 17:20:14 crc kubenswrapper[4708]: I0227 17:20:14.345267 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Feb 27 17:20:15 crc kubenswrapper[4708]: I0227 17:20:15.043920 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94b16cd7-7b50-4227-9477-98fff88f91f0","Type":"ContainerStarted","Data":"61144aee7e61df3ff84af96b8c4fa92e66f7bacfe397e03c4b8e9d5de139cb76"} Feb 27 17:20:15 crc kubenswrapper[4708]: I0227 17:20:15.045214 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Feb 27 17:20:15 crc kubenswrapper[4708]: I0227 17:20:15.046908 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-nrwjt" event={"ID":"c553d876-99a3-4aed-b8ce-5b7ea04f17d5","Type":"ContainerStarted","Data":"628ff3379399863dc641f831171bf611437414c1a3bfa51473e1a3f7b4e5e468"} Feb 27 17:20:15 crc kubenswrapper[4708]: I0227 17:20:15.145719 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-db-sync-nrwjt" podStartSLOduration=2.953214824 podStartE2EDuration="3.145700682s" podCreationTimestamp="2026-02-27 17:20:12 +0000 UTC" firstStartedPulling="2026-02-27 17:20:13.701956649 +0000 UTC m=+1612.217754236" lastFinishedPulling="2026-02-27 17:20:13.894442467 +0000 UTC m=+1612.410240094" observedRunningTime="2026-02-27 17:20:15.145145387 +0000 UTC m=+1613.660942974" watchObservedRunningTime="2026-02-27 17:20:15.145700682 +0000 UTC m=+1613.661498269" Feb 27 17:20:15 crc kubenswrapper[4708]: I0227 17:20:15.149253 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.466774229 podStartE2EDuration="6.149244872s" podCreationTimestamp="2026-02-27 17:20:09 +0000 UTC" firstStartedPulling="2026-02-27 17:20:10.186802097 +0000 UTC m=+1608.702599724" lastFinishedPulling="2026-02-27 17:20:13.86927278 +0000 UTC m=+1612.385070367" observedRunningTime="2026-02-27 17:20:15.112209191 +0000 UTC m=+1613.628006778" watchObservedRunningTime="2026-02-27 17:20:15.149244872 +0000 UTC m=+1613.665042459" Feb 27 17:20:15 crc kubenswrapper[4708]: I0227 17:20:15.683119 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:20:16 crc kubenswrapper[4708]: I0227 17:20:16.193174 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:20:16 crc kubenswrapper[4708]: I0227 17:20:16.383992 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Feb 27 17:20:17 crc kubenswrapper[4708]: I0227 17:20:17.067575 4708 generic.go:334] "Generic (PLEG): container finished" 
podID="c553d876-99a3-4aed-b8ce-5b7ea04f17d5" containerID="628ff3379399863dc641f831171bf611437414c1a3bfa51473e1a3f7b4e5e468" exitCode=0 Feb 27 17:20:17 crc kubenswrapper[4708]: I0227 17:20:17.068015 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="ceilometer-central-agent" containerID="cri-o://3c53a248c9576e1eb37f13b06006f1fcabaa878836e8f86f423837ae1eb7e92b" gracePeriod=30 Feb 27 17:20:17 crc kubenswrapper[4708]: I0227 17:20:17.067732 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-nrwjt" event={"ID":"c553d876-99a3-4aed-b8ce-5b7ea04f17d5","Type":"ContainerDied","Data":"628ff3379399863dc641f831171bf611437414c1a3bfa51473e1a3f7b4e5e468"} Feb 27 17:20:17 crc kubenswrapper[4708]: I0227 17:20:17.068490 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="proxy-httpd" containerID="cri-o://61144aee7e61df3ff84af96b8c4fa92e66f7bacfe397e03c4b8e9d5de139cb76" gracePeriod=30 Feb 27 17:20:17 crc kubenswrapper[4708]: I0227 17:20:17.068549 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="sg-core" containerID="cri-o://d828cadcaa795ea50ed65e65565c4b7585bc64c1d78927a2778145e023312e9a" gracePeriod=30 Feb 27 17:20:17 crc kubenswrapper[4708]: I0227 17:20:17.068582 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="ceilometer-notification-agent" containerID="cri-o://e1c74c6dfc8a8caa3d496b6ac28842e3487ba18895902adb0caa2f314b8b1e98" gracePeriod=30 Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.077755 4708 generic.go:334] "Generic (PLEG): container finished" podID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerID="61144aee7e61df3ff84af96b8c4fa92e66f7bacfe397e03c4b8e9d5de139cb76" exitCode=0 Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.078015 4708 generic.go:334] "Generic (PLEG): container finished" podID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerID="d828cadcaa795ea50ed65e65565c4b7585bc64c1d78927a2778145e023312e9a" exitCode=2 Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.078025 4708 generic.go:334] "Generic (PLEG): container finished" podID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerID="e1c74c6dfc8a8caa3d496b6ac28842e3487ba18895902adb0caa2f314b8b1e98" exitCode=0 Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.078033 4708 generic.go:334] "Generic (PLEG): container finished" podID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerID="3c53a248c9576e1eb37f13b06006f1fcabaa878836e8f86f423837ae1eb7e92b" exitCode=0 Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.078078 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94b16cd7-7b50-4227-9477-98fff88f91f0","Type":"ContainerDied","Data":"61144aee7e61df3ff84af96b8c4fa92e66f7bacfe397e03c4b8e9d5de139cb76"} Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.078131 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94b16cd7-7b50-4227-9477-98fff88f91f0","Type":"ContainerDied","Data":"d828cadcaa795ea50ed65e65565c4b7585bc64c1d78927a2778145e023312e9a"} Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.078143 4708 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94b16cd7-7b50-4227-9477-98fff88f91f0","Type":"ContainerDied","Data":"e1c74c6dfc8a8caa3d496b6ac28842e3487ba18895902adb0caa2f314b8b1e98"} Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.078156 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94b16cd7-7b50-4227-9477-98fff88f91f0","Type":"ContainerDied","Data":"3c53a248c9576e1eb37f13b06006f1fcabaa878836e8f86f423837ae1eb7e92b"} Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.165143 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.275693 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94b16cd7-7b50-4227-9477-98fff88f91f0-log-httpd\") pod \"94b16cd7-7b50-4227-9477-98fff88f91f0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.275736 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-sg-core-conf-yaml\") pod \"94b16cd7-7b50-4227-9477-98fff88f91f0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.275777 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94b16cd7-7b50-4227-9477-98fff88f91f0-run-httpd\") pod \"94b16cd7-7b50-4227-9477-98fff88f91f0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.275795 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-combined-ca-bundle\") pod \"94b16cd7-7b50-4227-9477-98fff88f91f0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.275867 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-scripts\") pod \"94b16cd7-7b50-4227-9477-98fff88f91f0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.275927 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-ceilometer-tls-certs\") pod \"94b16cd7-7b50-4227-9477-98fff88f91f0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.275986 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-config-data\") pod \"94b16cd7-7b50-4227-9477-98fff88f91f0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.276032 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp8rf\" (UniqueName: \"kubernetes.io/projected/94b16cd7-7b50-4227-9477-98fff88f91f0-kube-api-access-pp8rf\") pod \"94b16cd7-7b50-4227-9477-98fff88f91f0\" (UID: \"94b16cd7-7b50-4227-9477-98fff88f91f0\") " Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.276218 
4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94b16cd7-7b50-4227-9477-98fff88f91f0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "94b16cd7-7b50-4227-9477-98fff88f91f0" (UID: "94b16cd7-7b50-4227-9477-98fff88f91f0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.276602 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94b16cd7-7b50-4227-9477-98fff88f91f0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "94b16cd7-7b50-4227-9477-98fff88f91f0" (UID: "94b16cd7-7b50-4227-9477-98fff88f91f0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.278042 4708 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94b16cd7-7b50-4227-9477-98fff88f91f0-run-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.278065 4708 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/94b16cd7-7b50-4227-9477-98fff88f91f0-log-httpd\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.316981 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-scripts" (OuterVolumeSpecName: "scripts") pod "94b16cd7-7b50-4227-9477-98fff88f91f0" (UID: "94b16cd7-7b50-4227-9477-98fff88f91f0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.317104 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94b16cd7-7b50-4227-9477-98fff88f91f0-kube-api-access-pp8rf" (OuterVolumeSpecName: "kube-api-access-pp8rf") pod "94b16cd7-7b50-4227-9477-98fff88f91f0" (UID: "94b16cd7-7b50-4227-9477-98fff88f91f0"). InnerVolumeSpecName "kube-api-access-pp8rf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.342875 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "94b16cd7-7b50-4227-9477-98fff88f91f0" (UID: "94b16cd7-7b50-4227-9477-98fff88f91f0"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.371526 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "94b16cd7-7b50-4227-9477-98fff88f91f0" (UID: "94b16cd7-7b50-4227-9477-98fff88f91f0"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.385731 4708 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.385764 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.385773 4708 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.385783 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pp8rf\" (UniqueName: \"kubernetes.io/projected/94b16cd7-7b50-4227-9477-98fff88f91f0-kube-api-access-pp8rf\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.433789 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "94b16cd7-7b50-4227-9477-98fff88f91f0" (UID: "94b16cd7-7b50-4227-9477-98fff88f91f0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.443109 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-config-data" (OuterVolumeSpecName: "config-data") pod "94b16cd7-7b50-4227-9477-98fff88f91f0" (UID: "94b16cd7-7b50-4227-9477-98fff88f91f0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.478061 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.488061 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.488094 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/94b16cd7-7b50-4227-9477-98fff88f91f0-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.588833 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-combined-ca-bundle\") pod \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.588951 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-certs\") pod \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.589186 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j2cb\" (UniqueName: \"kubernetes.io/projected/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-kube-api-access-9j2cb\") pod \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.589212 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-config-data\") pod \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.589234 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-scripts\") pod \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\" (UID: \"c553d876-99a3-4aed-b8ce-5b7ea04f17d5\") " Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.591865 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-kube-api-access-9j2cb" (OuterVolumeSpecName: "kube-api-access-9j2cb") pod "c553d876-99a3-4aed-b8ce-5b7ea04f17d5" (UID: "c553d876-99a3-4aed-b8ce-5b7ea04f17d5"). InnerVolumeSpecName "kube-api-access-9j2cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.592264 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-scripts" (OuterVolumeSpecName: "scripts") pod "c553d876-99a3-4aed-b8ce-5b7ea04f17d5" (UID: "c553d876-99a3-4aed-b8ce-5b7ea04f17d5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.593906 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-certs" (OuterVolumeSpecName: "certs") pod "c553d876-99a3-4aed-b8ce-5b7ea04f17d5" (UID: "c553d876-99a3-4aed-b8ce-5b7ea04f17d5"). 
InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.618067 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c553d876-99a3-4aed-b8ce-5b7ea04f17d5" (UID: "c553d876-99a3-4aed-b8ce-5b7ea04f17d5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.633659 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-config-data" (OuterVolumeSpecName: "config-data") pod "c553d876-99a3-4aed-b8ce-5b7ea04f17d5" (UID: "c553d876-99a3-4aed-b8ce-5b7ea04f17d5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.640718 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="32b89444-fadf-43c8-b552-e5071fc91481" containerName="rabbitmq" containerID="cri-o://1ca88ce2694172c59a6608e1e8fb608a1335235e3e507e2c478559eb8a4c31f7" gracePeriod=604796 Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.692554 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j2cb\" (UniqueName: \"kubernetes.io/projected/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-kube-api-access-9j2cb\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.692596 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.692610 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-scripts\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.692623 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:18 crc kubenswrapper[4708]: I0227 17:20:18.692635 4708 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/c553d876-99a3-4aed-b8ce-5b7ea04f17d5-certs\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.090124 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-db-sync-nrwjt" event={"ID":"c553d876-99a3-4aed-b8ce-5b7ea04f17d5","Type":"ContainerDied","Data":"1db88b7c7bd7a53a049697844590a5392894e5537ee5cfb4259650b9459772c4"} Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.090402 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1db88b7c7bd7a53a049697844590a5392894e5537ee5cfb4259650b9459772c4" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.090147 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-db-sync-nrwjt" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.094085 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"94b16cd7-7b50-4227-9477-98fff88f91f0","Type":"ContainerDied","Data":"a74361736af7a641771278086f273d92dfa4024d2eef94631332791446804199"} Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.094127 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.094153 4708 scope.go:117] "RemoveContainer" containerID="61144aee7e61df3ff84af96b8c4fa92e66f7bacfe397e03c4b8e9d5de139cb76" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.118410 4708 scope.go:117] "RemoveContainer" containerID="d828cadcaa795ea50ed65e65565c4b7585bc64c1d78927a2778145e023312e9a" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.135832 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.141874 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.161512 4708 scope.go:117] "RemoveContainer" containerID="e1c74c6dfc8a8caa3d496b6ac28842e3487ba18895902adb0caa2f314b8b1e98" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.167517 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:20:19 crc kubenswrapper[4708]: E0227 17:20:19.168201 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="ceilometer-central-agent" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.168216 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="ceilometer-central-agent" Feb 27 17:20:19 crc kubenswrapper[4708]: E0227 17:20:19.168232 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c553d876-99a3-4aed-b8ce-5b7ea04f17d5" containerName="cloudkitty-db-sync" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.168238 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c553d876-99a3-4aed-b8ce-5b7ea04f17d5" containerName="cloudkitty-db-sync" Feb 27 17:20:19 crc kubenswrapper[4708]: E0227 17:20:19.168250 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="sg-core" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.168256 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="sg-core" Feb 27 17:20:19 crc kubenswrapper[4708]: E0227 17:20:19.168267 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="proxy-httpd" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.168274 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="proxy-httpd" Feb 27 17:20:19 crc kubenswrapper[4708]: E0227 17:20:19.168295 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="ceilometer-notification-agent" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.168301 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="ceilometer-notification-agent" Feb 27 17:20:19 crc 
kubenswrapper[4708]: I0227 17:20:19.168476 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c553d876-99a3-4aed-b8ce-5b7ea04f17d5" containerName="cloudkitty-db-sync" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.168488 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="sg-core" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.168501 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="ceilometer-notification-agent" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.168519 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="ceilometer-central-agent" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.168529 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" containerName="proxy-httpd" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.171724 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.182445 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.182699 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.184678 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.187901 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.194970 4708 scope.go:117] "RemoveContainer" containerID="3c53a248c9576e1eb37f13b06006f1fcabaa878836e8f86f423837ae1eb7e92b" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.317857 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.317955 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-log-httpd\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.318007 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-scripts\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.318046 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-run-httpd\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.318135 4708 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9sxtd\" (UniqueName: \"kubernetes.io/projected/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-kube-api-access-9sxtd\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.318244 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-config-data\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.318264 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.318279 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.421077 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9sxtd\" (UniqueName: \"kubernetes.io/projected/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-kube-api-access-9sxtd\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.421178 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-config-data\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.421199 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.421213 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.421275 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.421302 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-log-httpd\") pod \"ceilometer-0\" (UID: 
\"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.421324 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-scripts\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.421350 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-run-httpd\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.421797 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-run-httpd\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.421876 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-log-httpd\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.426109 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.426665 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.426953 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-config-data\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.427276 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.441365 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-scripts\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.442358 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9sxtd\" (UniqueName: \"kubernetes.io/projected/c1ce78ce-446b-4a42-bd4f-59fe2264e7c2-kube-api-access-9sxtd\") pod \"ceilometer-0\" (UID: \"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2\") " pod="openstack/ceilometer-0" Feb 27 17:20:19 
crc kubenswrapper[4708]: I0227 17:20:19.554436 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.560857 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-5p4n2"] Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.569522 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-5p4n2"] Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.656286 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-storageinit-4sj27"] Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.657765 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.668224 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-4sj27"] Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.670011 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.827801 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-combined-ca-bundle\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.827965 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-scripts\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.828375 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a789b4af-a0dc-41c9-907f-92f896befb9a-certs\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.828512 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-config-data\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.828589 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6bjw\" (UniqueName: \"kubernetes.io/projected/a789b4af-a0dc-41c9-907f-92f896befb9a-kube-api-access-d6bjw\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.930409 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a789b4af-a0dc-41c9-907f-92f896befb9a-certs\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc 
kubenswrapper[4708]: I0227 17:20:19.930490 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-config-data\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.930542 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6bjw\" (UniqueName: \"kubernetes.io/projected/a789b4af-a0dc-41c9-907f-92f896befb9a-kube-api-access-d6bjw\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.930619 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-combined-ca-bundle\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.930657 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-scripts\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.937608 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-scripts\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.937609 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-combined-ca-bundle\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.937626 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a789b4af-a0dc-41c9-907f-92f896befb9a-certs\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.938601 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-config-data\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.943750 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6bjw\" (UniqueName: \"kubernetes.io/projected/a789b4af-a0dc-41c9-907f-92f896befb9a-kube-api-access-d6bjw\") pod \"cloudkitty-storageinit-4sj27\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") " pod="openstack/cloudkitty-storageinit-4sj27" Feb 27 17:20:19 crc kubenswrapper[4708]: I0227 17:20:19.974242 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-storageinit-4sj27"
Feb 27 17:20:20 crc kubenswrapper[4708]: I0227 17:20:20.076214 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Feb 27 17:20:20 crc kubenswrapper[4708]: I0227 17:20:20.110065 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2","Type":"ContainerStarted","Data":"d971cded9b3eca32c52fbf5b627c308a0653c8e5417e65144400ed5de7203cde"}
Feb 27 17:20:20 crc kubenswrapper[4708]: I0227 17:20:20.228081 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e"
Feb 27 17:20:20 crc kubenswrapper[4708]: E0227 17:20:20.228394 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 17:20:20 crc kubenswrapper[4708]: I0227 17:20:20.238757 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5375b346-0435-45a2-bc67-f966299a9f4f" path="/var/lib/kubelet/pods/5375b346-0435-45a2-bc67-f966299a9f4f/volumes"
Feb 27 17:20:20 crc kubenswrapper[4708]: I0227 17:20:20.239346 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94b16cd7-7b50-4227-9477-98fff88f91f0" path="/var/lib/kubelet/pods/94b16cd7-7b50-4227-9477-98fff88f91f0/volumes"
Feb 27 17:20:20 crc kubenswrapper[4708]: I0227 17:20:20.386085 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cdvx5" podUID="3856bd24-a61f-4c56-bfe9-5734964010fc" containerName="registry-server" probeResult="failure" output=<
Feb 27 17:20:20 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s
Feb 27 17:20:20 crc kubenswrapper[4708]: >
Feb 27 17:20:20 crc kubenswrapper[4708]: I0227 17:20:20.487600 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-storageinit-4sj27"]
Feb 27 17:20:20 crc kubenswrapper[4708]: I0227 17:20:20.511929 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="eb2fe191-cb57-46a6-9797-c9890640ff74" containerName="rabbitmq" containerID="cri-o://d70b0921760a7a848190d0fade08cdb551a09315fa96f6b5a405260ee0c01cbd" gracePeriod=604796
Feb 27 17:20:21 crc kubenswrapper[4708]: I0227 17:20:21.129609 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-4sj27" event={"ID":"a789b4af-a0dc-41c9-907f-92f896befb9a","Type":"ContainerStarted","Data":"f3d79925d3c93f6bbe5d80d792d5683c9ab04a3a48f1cadd41d8d15df04950bc"}
Feb 27 17:20:21 crc kubenswrapper[4708]: I0227 17:20:21.129921 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-4sj27" event={"ID":"a789b4af-a0dc-41c9-907f-92f896befb9a","Type":"ContainerStarted","Data":"a682620b7268781ae7f7fa5d2021eea36fb42b7074a55962b221e4de16ee473c"}
Feb 27 17:20:21 crc kubenswrapper[4708]: I0227 17:20:21.164435 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-storageinit-4sj27" podStartSLOduration=2.164414723 podStartE2EDuration="2.164414723s" podCreationTimestamp="2026-02-27 17:20:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:20:21.15292164 +0000 UTC m=+1619.668719247" watchObservedRunningTime="2026-02-27 17:20:21.164414723 +0000 UTC m=+1619.680212320"
Feb 27 17:20:23 crc kubenswrapper[4708]: I0227 17:20:23.188476 4708 generic.go:334] "Generic (PLEG): container finished" podID="a789b4af-a0dc-41c9-907f-92f896befb9a" containerID="f3d79925d3c93f6bbe5d80d792d5683c9ab04a3a48f1cadd41d8d15df04950bc" exitCode=0
Feb 27 17:20:23 crc kubenswrapper[4708]: I0227 17:20:23.189013 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-4sj27" event={"ID":"a789b4af-a0dc-41c9-907f-92f896befb9a","Type":"ContainerDied","Data":"f3d79925d3c93f6bbe5d80d792d5683c9ab04a3a48f1cadd41d8d15df04950bc"}
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.213524 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2","Type":"ContainerStarted","Data":"7e6b29bd8ce04286835dc1c9e491d072dea3c386341a080ddda60d5d344281b7"}
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.690717 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-4sj27"
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.847756 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-combined-ca-bundle\") pod \"a789b4af-a0dc-41c9-907f-92f896befb9a\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") "
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.847924 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-scripts\") pod \"a789b4af-a0dc-41c9-907f-92f896befb9a\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") "
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.848402 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-config-data\") pod \"a789b4af-a0dc-41c9-907f-92f896befb9a\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") "
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.848485 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6bjw\" (UniqueName: \"kubernetes.io/projected/a789b4af-a0dc-41c9-907f-92f896befb9a-kube-api-access-d6bjw\") pod \"a789b4af-a0dc-41c9-907f-92f896befb9a\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") "
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.848511 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a789b4af-a0dc-41c9-907f-92f896befb9a-certs\") pod \"a789b4af-a0dc-41c9-907f-92f896befb9a\" (UID: \"a789b4af-a0dc-41c9-907f-92f896befb9a\") "
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.853101 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a789b4af-a0dc-41c9-907f-92f896befb9a-certs" (OuterVolumeSpecName: "certs") pod "a789b4af-a0dc-41c9-907f-92f896befb9a" (UID: "a789b4af-a0dc-41c9-907f-92f896befb9a"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.853105 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-scripts" (OuterVolumeSpecName: "scripts") pod "a789b4af-a0dc-41c9-907f-92f896befb9a" (UID: "a789b4af-a0dc-41c9-907f-92f896befb9a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.854052 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a789b4af-a0dc-41c9-907f-92f896befb9a-kube-api-access-d6bjw" (OuterVolumeSpecName: "kube-api-access-d6bjw") pod "a789b4af-a0dc-41c9-907f-92f896befb9a" (UID: "a789b4af-a0dc-41c9-907f-92f896befb9a"). InnerVolumeSpecName "kube-api-access-d6bjw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.879532 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-config-data" (OuterVolumeSpecName: "config-data") pod "a789b4af-a0dc-41c9-907f-92f896befb9a" (UID: "a789b4af-a0dc-41c9-907f-92f896befb9a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.879603 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a789b4af-a0dc-41c9-907f-92f896befb9a" (UID: "a789b4af-a0dc-41c9-907f-92f896befb9a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.960256 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.961483 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.961523 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6bjw\" (UniqueName: \"kubernetes.io/projected/a789b4af-a0dc-41c9-907f-92f896befb9a-kube-api-access-d6bjw\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.961533 4708 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/a789b4af-a0dc-41c9-907f-92f896befb9a-certs\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:24 crc kubenswrapper[4708]: I0227 17:20:24.961542 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a789b4af-a0dc-41c9-907f-92f896befb9a-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.233164 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.235401 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2","Type":"ContainerStarted","Data":"e5d5ff65020fb6de8ef704d755ddab6372967cbf68630056011b51f0417197b5"}
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.239074 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-storageinit-4sj27" event={"ID":"a789b4af-a0dc-41c9-907f-92f896befb9a","Type":"ContainerDied","Data":"a682620b7268781ae7f7fa5d2021eea36fb42b7074a55962b221e4de16ee473c"}
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.239115 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a682620b7268781ae7f7fa5d2021eea36fb42b7074a55962b221e4de16ee473c"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.239186 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-storageinit-4sj27"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.251669 4708 generic.go:334] "Generic (PLEG): container finished" podID="32b89444-fadf-43c8-b552-e5071fc91481" containerID="1ca88ce2694172c59a6608e1e8fb608a1335235e3e507e2c478559eb8a4c31f7" exitCode=0
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.251711 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"32b89444-fadf-43c8-b552-e5071fc91481","Type":"ContainerDied","Data":"1ca88ce2694172c59a6608e1e8fb608a1335235e3e507e2c478559eb8a4c31f7"}
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.251742 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"32b89444-fadf-43c8-b552-e5071fc91481","Type":"ContainerDied","Data":"6edf45486a2fdec51bf1be93b35309afd6be0bacf3d91179bfc684e86c59caa6"}
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.251762 4708 scope.go:117] "RemoveContainer" containerID="1ca88ce2694172c59a6608e1e8fb608a1335235e3e507e2c478559eb8a4c31f7"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.251911 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.349941 4708 scope.go:117] "RemoveContainer" containerID="8d25a437ba280e82ae6ccb8c17682dd2c1e48ce39e30a689e3c5b1b70467c5c8"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.368192 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-tls\") pod \"32b89444-fadf-43c8-b552-e5071fc91481\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") "
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.368228 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tq2z\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-kube-api-access-4tq2z\") pod \"32b89444-fadf-43c8-b552-e5071fc91481\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") "
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.368261 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-erlang-cookie\") pod \"32b89444-fadf-43c8-b552-e5071fc91481\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") "
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.368283 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-config-data\") pod \"32b89444-fadf-43c8-b552-e5071fc91481\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") "
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.368338 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-server-conf\") pod \"32b89444-fadf-43c8-b552-e5071fc91481\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") "
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.368359 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-confd\") pod \"32b89444-fadf-43c8-b552-e5071fc91481\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") "
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.368372 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-plugins\") pod \"32b89444-fadf-43c8-b552-e5071fc91481\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") "
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.368431 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32b89444-fadf-43c8-b552-e5071fc91481-pod-info\") pod \"32b89444-fadf-43c8-b552-e5071fc91481\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") "
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.368451 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32b89444-fadf-43c8-b552-e5071fc91481-erlang-cookie-secret\") pod \"32b89444-fadf-43c8-b552-e5071fc91481\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") "
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.368487 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-plugins-conf\") pod \"32b89444-fadf-43c8-b552-e5071fc91481\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") "
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.369402 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\") pod \"32b89444-fadf-43c8-b552-e5071fc91481\" (UID: \"32b89444-fadf-43c8-b552-e5071fc91481\") "
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.382020 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "32b89444-fadf-43c8-b552-e5071fc91481" (UID: "32b89444-fadf-43c8-b552-e5071fc91481"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.388365 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-kube-api-access-4tq2z" (OuterVolumeSpecName: "kube-api-access-4tq2z") pod "32b89444-fadf-43c8-b552-e5071fc91481" (UID: "32b89444-fadf-43c8-b552-e5071fc91481"). InnerVolumeSpecName "kube-api-access-4tq2z". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.392031 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/32b89444-fadf-43c8-b552-e5071fc91481-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "32b89444-fadf-43c8-b552-e5071fc91481" (UID: "32b89444-fadf-43c8-b552-e5071fc91481"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.398010 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "32b89444-fadf-43c8-b552-e5071fc91481" (UID: "32b89444-fadf-43c8-b552-e5071fc91481"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.400913 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "32b89444-fadf-43c8-b552-e5071fc91481" (UID: "32b89444-fadf-43c8-b552-e5071fc91481"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.402276 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "32b89444-fadf-43c8-b552-e5071fc91481" (UID: "32b89444-fadf-43c8-b552-e5071fc91481"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.407738 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/32b89444-fadf-43c8-b552-e5071fc91481-pod-info" (OuterVolumeSpecName: "pod-info") pod "32b89444-fadf-43c8-b552-e5071fc91481" (UID: "32b89444-fadf-43c8-b552-e5071fc91481"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.412046 4708 scope.go:117] "RemoveContainer" containerID="1ca88ce2694172c59a6608e1e8fb608a1335235e3e507e2c478559eb8a4c31f7"
Feb 27 17:20:25 crc kubenswrapper[4708]: E0227 17:20:25.414763 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ca88ce2694172c59a6608e1e8fb608a1335235e3e507e2c478559eb8a4c31f7\": container with ID starting with 1ca88ce2694172c59a6608e1e8fb608a1335235e3e507e2c478559eb8a4c31f7 not found: ID does not exist" containerID="1ca88ce2694172c59a6608e1e8fb608a1335235e3e507e2c478559eb8a4c31f7"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.414796 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ca88ce2694172c59a6608e1e8fb608a1335235e3e507e2c478559eb8a4c31f7"} err="failed to get container status \"1ca88ce2694172c59a6608e1e8fb608a1335235e3e507e2c478559eb8a4c31f7\": rpc error: code = NotFound desc = could not find container \"1ca88ce2694172c59a6608e1e8fb608a1335235e3e507e2c478559eb8a4c31f7\": container with ID starting with 1ca88ce2694172c59a6608e1e8fb608a1335235e3e507e2c478559eb8a4c31f7 not found: ID does not exist"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.414820 4708 scope.go:117] "RemoveContainer" containerID="8d25a437ba280e82ae6ccb8c17682dd2c1e48ce39e30a689e3c5b1b70467c5c8"
Feb 27 17:20:25 crc kubenswrapper[4708]: E0227 17:20:25.415784 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d25a437ba280e82ae6ccb8c17682dd2c1e48ce39e30a689e3c5b1b70467c5c8\": container with ID starting with 8d25a437ba280e82ae6ccb8c17682dd2c1e48ce39e30a689e3c5b1b70467c5c8 not found: ID does not exist" containerID="8d25a437ba280e82ae6ccb8c17682dd2c1e48ce39e30a689e3c5b1b70467c5c8"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.415813 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d25a437ba280e82ae6ccb8c17682dd2c1e48ce39e30a689e3c5b1b70467c5c8"} err="failed to get container status \"8d25a437ba280e82ae6ccb8c17682dd2c1e48ce39e30a689e3c5b1b70467c5c8\": rpc error: code = NotFound desc = could not find container \"8d25a437ba280e82ae6ccb8c17682dd2c1e48ce39e30a689e3c5b1b70467c5c8\": container with ID starting with 8d25a437ba280e82ae6ccb8c17682dd2c1e48ce39e30a689e3c5b1b70467c5c8 not found: ID does not exist"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.419560 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.419750 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-proc-0" podUID="607cf703-5051-4836-92bd-657dbab39bd4" containerName="cloudkitty-proc" containerID="cri-o://0acb70ddb957b8a9e114d0e8c7558e2447b45e540454c2b0933936cd17f84728" gracePeriod=30
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.440952 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cef6b270-aab7-4584-ad7b-65bf989b764e" (OuterVolumeSpecName: "persistence") pod "32b89444-fadf-43c8-b552-e5071fc91481" (UID: "32b89444-fadf-43c8-b552-e5071fc91481"). InnerVolumeSpecName "pvc-cef6b270-aab7-4584-ad7b-65bf989b764e". PluginName "kubernetes.io/csi", VolumeGidValue ""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.456990 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.458160 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="0d761a40-6a7c-4691-a079-919d74122b18" containerName="cloudkitty-api-log" containerID="cri-o://88b5fc6bee7f8bed88317544d57f702857c891028d6f5980d0c0cb416af5e36d" gracePeriod=30
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.462286 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cloudkitty-api-0" podUID="0d761a40-6a7c-4691-a079-919d74122b18" containerName="cloudkitty-api" containerID="cri-o://33b8361e5bf13f90d6c7d3ca6ff9b4201c091d33659bca65cb35685b6ebfa0d6" gracePeriod=30
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.475074 4708 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\") on node \"crc\" "
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.475105 4708 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-tls\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.475115 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4tq2z\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-kube-api-access-4tq2z\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.475125 4708 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.475133 4708 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-plugins\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.475141 4708 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/32b89444-fadf-43c8-b552-e5071fc91481-pod-info\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.475149 4708 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/32b89444-fadf-43c8-b552-e5071fc91481-erlang-cookie-secret\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.475157 4708 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-plugins-conf\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.505896 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-config-data" (OuterVolumeSpecName: "config-data") pod "32b89444-fadf-43c8-b552-e5071fc91481" (UID: "32b89444-fadf-43c8-b552-e5071fc91481"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.547233 4708 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice...
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.548469 4708 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-cef6b270-aab7-4584-ad7b-65bf989b764e" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cef6b270-aab7-4584-ad7b-65bf989b764e") on node "crc"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.549552 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-server-conf" (OuterVolumeSpecName: "server-conf") pod "32b89444-fadf-43c8-b552-e5071fc91481" (UID: "32b89444-fadf-43c8-b552-e5071fc91481"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.578686 4708 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-server-conf\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.578714 4708 reconciler_common.go:293] "Volume detached for volume \"pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.578724 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/32b89444-fadf-43c8-b552-e5071fc91481-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.592529 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "32b89444-fadf-43c8-b552-e5071fc91481" (UID: "32b89444-fadf-43c8-b552-e5071fc91481"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.680185 4708 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/32b89444-fadf-43c8-b552-e5071fc91481-rabbitmq-confd\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.883714 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.894325 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.915838 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 27 17:20:25 crc kubenswrapper[4708]: E0227 17:20:25.916265 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b89444-fadf-43c8-b552-e5071fc91481" containerName="setup-container"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.916279 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b89444-fadf-43c8-b552-e5071fc91481" containerName="setup-container"
Feb 27 17:20:25 crc kubenswrapper[4708]: E0227 17:20:25.916287 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="32b89444-fadf-43c8-b552-e5071fc91481" containerName="rabbitmq"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.916293 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b89444-fadf-43c8-b552-e5071fc91481" containerName="rabbitmq"
Feb 27 17:20:25 crc kubenswrapper[4708]: E0227 17:20:25.916325 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a789b4af-a0dc-41c9-907f-92f896befb9a" containerName="cloudkitty-storageinit"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.916331 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="a789b4af-a0dc-41c9-907f-92f896befb9a" containerName="cloudkitty-storageinit"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.916507 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="a789b4af-a0dc-41c9-907f-92f896befb9a" containerName="cloudkitty-storageinit"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.916537 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="32b89444-fadf-43c8-b552-e5071fc91481" containerName="rabbitmq"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.917631 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.920253 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-zgrwx"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.921091 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.921989 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.922290 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.922539 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.923160 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.923513 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Feb 27 17:20:25 crc kubenswrapper[4708]: I0227 17:20:25.996760 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.087170 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/866e4edf-2f8a-4c4b-9caf-54ad03011231-config-data\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.087212 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.087247 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/866e4edf-2f8a-4c4b-9caf-54ad03011231-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.087323 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/866e4edf-2f8a-4c4b-9caf-54ad03011231-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.087378 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/866e4edf-2f8a-4c4b-9caf-54ad03011231-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.087473 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/866e4edf-2f8a-4c4b-9caf-54ad03011231-pod-info\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.087493 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/866e4edf-2f8a-4c4b-9caf-54ad03011231-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.087530 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgk9d\" (UniqueName: \"kubernetes.io/projected/866e4edf-2f8a-4c4b-9caf-54ad03011231-kube-api-access-tgk9d\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.087672 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/866e4edf-2f8a-4c4b-9caf-54ad03011231-server-conf\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.087730 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/866e4edf-2f8a-4c4b-9caf-54ad03011231-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.087916 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/866e4edf-2f8a-4c4b-9caf-54ad03011231-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.189970 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/866e4edf-2f8a-4c4b-9caf-54ad03011231-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.190059 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/866e4edf-2f8a-4c4b-9caf-54ad03011231-pod-info\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.190082 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/866e4edf-2f8a-4c4b-9caf-54ad03011231-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.190123 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgk9d\" (UniqueName: \"kubernetes.io/projected/866e4edf-2f8a-4c4b-9caf-54ad03011231-kube-api-access-tgk9d\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.190174 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/866e4edf-2f8a-4c4b-9caf-54ad03011231-server-conf\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.190190 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/866e4edf-2f8a-4c4b-9caf-54ad03011231-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.190233 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/866e4edf-2f8a-4c4b-9caf-54ad03011231-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.190264 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/866e4edf-2f8a-4c4b-9caf-54ad03011231-config-data\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.190285 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.190312 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/866e4edf-2f8a-4c4b-9caf-54ad03011231-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.190334 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/866e4edf-2f8a-4c4b-9caf-54ad03011231-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.191384 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/866e4edf-2f8a-4c4b-9caf-54ad03011231-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.191622 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/866e4edf-2f8a-4c4b-9caf-54ad03011231-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.192129 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/866e4edf-2f8a-4c4b-9caf-54ad03011231-server-conf\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.192353 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/866e4edf-2f8a-4c4b-9caf-54ad03011231-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.192834 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/866e4edf-2f8a-4c4b-9caf-54ad03011231-config-data\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.194590 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/866e4edf-2f8a-4c4b-9caf-54ad03011231-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.194694 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/866e4edf-2f8a-4c4b-9caf-54ad03011231-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.195100 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/866e4edf-2f8a-4c4b-9caf-54ad03011231-pod-info\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.198344 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/866e4edf-2f8a-4c4b-9caf-54ad03011231-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.200754 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.200781 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/ef34f33f9707a5269ee06d9790943c794b4a35585830a0fadfbdb657babc33a0/globalmount\"" pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.220749 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgk9d\" (UniqueName: \"kubernetes.io/projected/866e4edf-2f8a-4c4b-9caf-54ad03011231-kube-api-access-tgk9d\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.246414 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b89444-fadf-43c8-b552-e5071fc91481" path="/var/lib/kubelet/pods/32b89444-fadf-43c8-b552-e5071fc91481/volumes"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.271182 4708 generic.go:334] "Generic (PLEG): container finished" podID="0d761a40-6a7c-4691-a079-919d74122b18" containerID="88b5fc6bee7f8bed88317544d57f702857c891028d6f5980d0c0cb416af5e36d" exitCode=143
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.271235 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0d761a40-6a7c-4691-a079-919d74122b18","Type":"ContainerDied","Data":"88b5fc6bee7f8bed88317544d57f702857c891028d6f5980d0c0cb416af5e36d"}
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.278214 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-cef6b270-aab7-4584-ad7b-65bf989b764e\") pod \"rabbitmq-server-0\" (UID: \"866e4edf-2f8a-4c4b-9caf-54ad03011231\") " pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.281434 4708 generic.go:334] "Generic (PLEG): container finished" podID="607cf703-5051-4836-92bd-657dbab39bd4" containerID="0acb70ddb957b8a9e114d0e8c7558e2447b45e540454c2b0933936cd17f84728" exitCode=0
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.281473 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"607cf703-5051-4836-92bd-657dbab39bd4","Type":"ContainerDied","Data":"0acb70ddb957b8a9e114d0e8c7558e2447b45e540454c2b0933936cd17f84728"}
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.284835 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2","Type":"ContainerStarted","Data":"d57623a0fe9753fc8300e3a1f43a5fb4a131ef9f525bbf4ed499bf6f2ccd2571"}
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.539493 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Feb 27 17:20:26 crc kubenswrapper[4708]: I0227 17:20:26.859794 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.001067 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0"
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.014370 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-config-data-custom\") pod \"607cf703-5051-4836-92bd-657dbab39bd4\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.014479 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/607cf703-5051-4836-92bd-657dbab39bd4-certs\") pod \"607cf703-5051-4836-92bd-657dbab39bd4\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.014564 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-combined-ca-bundle\") pod \"607cf703-5051-4836-92bd-657dbab39bd4\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.014608 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-config-data\") pod \"607cf703-5051-4836-92bd-657dbab39bd4\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.014692 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tndks\" (UniqueName: \"kubernetes.io/projected/607cf703-5051-4836-92bd-657dbab39bd4-kube-api-access-tndks\") pod \"607cf703-5051-4836-92bd-657dbab39bd4\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.014720 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-scripts\") pod \"607cf703-5051-4836-92bd-657dbab39bd4\" (UID: \"607cf703-5051-4836-92bd-657dbab39bd4\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.037923 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "607cf703-5051-4836-92bd-657dbab39bd4" (UID: "607cf703-5051-4836-92bd-657dbab39bd4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.038401 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/607cf703-5051-4836-92bd-657dbab39bd4-certs" (OuterVolumeSpecName: "certs") pod "607cf703-5051-4836-92bd-657dbab39bd4" (UID: "607cf703-5051-4836-92bd-657dbab39bd4"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.043599 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/607cf703-5051-4836-92bd-657dbab39bd4-kube-api-access-tndks" (OuterVolumeSpecName: "kube-api-access-tndks") pod "607cf703-5051-4836-92bd-657dbab39bd4" (UID: "607cf703-5051-4836-92bd-657dbab39bd4"). InnerVolumeSpecName "kube-api-access-tndks". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.049608 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-scripts" (OuterVolumeSpecName: "scripts") pod "607cf703-5051-4836-92bd-657dbab39bd4" (UID: "607cf703-5051-4836-92bd-657dbab39bd4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.065772 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-config-data" (OuterVolumeSpecName: "config-data") pod "607cf703-5051-4836-92bd-657dbab39bd4" (UID: "607cf703-5051-4836-92bd-657dbab39bd4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.067185 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "607cf703-5051-4836-92bd-657dbab39bd4" (UID: "607cf703-5051-4836-92bd-657dbab39bd4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.116668 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-public-tls-certs\") pod \"0d761a40-6a7c-4691-a079-919d74122b18\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.116813 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-scripts\") pod \"0d761a40-6a7c-4691-a079-919d74122b18\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.116999 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-combined-ca-bundle\") pod \"0d761a40-6a7c-4691-a079-919d74122b18\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.117090 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d761a40-6a7c-4691-a079-919d74122b18-logs\") pod \"0d761a40-6a7c-4691-a079-919d74122b18\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.117149 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-internal-tls-certs\") pod \"0d761a40-6a7c-4691-a079-919d74122b18\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.117225 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pf5h\" (UniqueName: \"kubernetes.io/projected/0d761a40-6a7c-4691-a079-919d74122b18-kube-api-access-7pf5h\") pod \"0d761a40-6a7c-4691-a079-919d74122b18\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.117288 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0d761a40-6a7c-4691-a079-919d74122b18-certs\") pod \"0d761a40-6a7c-4691-a079-919d74122b18\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.117325 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-config-data\") pod \"0d761a40-6a7c-4691-a079-919d74122b18\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.117350 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-config-data-custom\") pod \"0d761a40-6a7c-4691-a079-919d74122b18\" (UID: \"0d761a40-6a7c-4691-a079-919d74122b18\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.117723 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d761a40-6a7c-4691-a079-919d74122b18-logs" (OuterVolumeSpecName: "logs") pod "0d761a40-6a7c-4691-a079-919d74122b18" (UID: "0d761a40-6a7c-4691-a079-919d74122b18"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.118794 4708 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/607cf703-5051-4836-92bd-657dbab39bd4-certs\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.118820 4708 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0d761a40-6a7c-4691-a079-919d74122b18-logs\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.118834 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.118906 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.118920 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tndks\" (UniqueName: \"kubernetes.io/projected/607cf703-5051-4836-92bd-657dbab39bd4-kube-api-access-tndks\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.118932 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.118944 4708 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/607cf703-5051-4836-92bd-657dbab39bd4-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.122986 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d761a40-6a7c-4691-a079-919d74122b18-kube-api-access-7pf5h" (OuterVolumeSpecName: "kube-api-access-7pf5h") pod "0d761a40-6a7c-4691-a079-919d74122b18" (UID: "0d761a40-6a7c-4691-a079-919d74122b18"). InnerVolumeSpecName "kube-api-access-7pf5h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.123511 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-scripts" (OuterVolumeSpecName: "scripts") pod "0d761a40-6a7c-4691-a079-919d74122b18" (UID: "0d761a40-6a7c-4691-a079-919d74122b18"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.123528 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "0d761a40-6a7c-4691-a079-919d74122b18" (UID: "0d761a40-6a7c-4691-a079-919d74122b18"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.125277 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d761a40-6a7c-4691-a079-919d74122b18-certs" (OuterVolumeSpecName: "certs") pod "0d761a40-6a7c-4691-a079-919d74122b18" (UID: "0d761a40-6a7c-4691-a079-919d74122b18"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.180239 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.193982 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0d761a40-6a7c-4691-a079-919d74122b18" (UID: "0d761a40-6a7c-4691-a079-919d74122b18"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.203743 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-config-data" (OuterVolumeSpecName: "config-data") pod "0d761a40-6a7c-4691-a079-919d74122b18" (UID: "0d761a40-6a7c-4691-a079-919d74122b18"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.221279 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pf5h\" (UniqueName: \"kubernetes.io/projected/0d761a40-6a7c-4691-a079-919d74122b18-kube-api-access-7pf5h\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.221313 4708 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/projected/0d761a40-6a7c-4691-a079-919d74122b18-certs\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.221326 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-config-data\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.221336 4708 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-config-data-custom\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.221346 4708 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-scripts\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.221355 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.243784 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.252154 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "0d761a40-6a7c-4691-a079-919d74122b18" (UID: "0d761a40-6a7c-4691-a079-919d74122b18"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.262027 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "0d761a40-6a7c-4691-a079-919d74122b18" (UID: "0d761a40-6a7c-4691-a079-919d74122b18"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.322381 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eb2fe191-cb57-46a6-9797-c9890640ff74-erlang-cookie-secret\") pod \"eb2fe191-cb57-46a6-9797-c9890640ff74\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.326548 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b24d01da-b002-4c89-a426-a8dd80e44135\") pod \"eb2fe191-cb57-46a6-9797-c9890640ff74\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.326631 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-config-data\") pod \"eb2fe191-cb57-46a6-9797-c9890640ff74\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.326661 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-server-conf\") pod \"eb2fe191-cb57-46a6-9797-c9890640ff74\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.326759 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eb2fe191-cb57-46a6-9797-c9890640ff74-pod-info\") pod \"eb2fe191-cb57-46a6-9797-c9890640ff74\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.326777 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rht9n\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-kube-api-access-rht9n\") pod \"eb2fe191-cb57-46a6-9797-c9890640ff74\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.326860 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-plugins\") pod \"eb2fe191-cb57-46a6-9797-c9890640ff74\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.326889 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-tls\") pod \"eb2fe191-cb57-46a6-9797-c9890640ff74\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.326915 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-erlang-cookie\") pod \"eb2fe191-cb57-46a6-9797-c9890640ff74\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.326962 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-confd\") pod \"eb2fe191-cb57-46a6-9797-c9890640ff74\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.327688 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-plugins-conf\") pod \"eb2fe191-cb57-46a6-9797-c9890640ff74\" (UID: \"eb2fe191-cb57-46a6-9797-c9890640ff74\") "
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.328458 4708 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.328476 4708 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/0d761a40-6a7c-4691-a079-919d74122b18-public-tls-certs\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.329922 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb2fe191-cb57-46a6-9797-c9890640ff74-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "eb2fe191-cb57-46a6-9797-c9890640ff74" (UID: "eb2fe191-cb57-46a6-9797-c9890640ff74"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.329951 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "eb2fe191-cb57-46a6-9797-c9890640ff74" (UID: "eb2fe191-cb57-46a6-9797-c9890640ff74"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.333866 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"607cf703-5051-4836-92bd-657dbab39bd4","Type":"ContainerDied","Data":"a9f71fe981ad286a34f9714080e0779b6e69ec09cd4b41eaef90962c4a9fcef1"}
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.333915 4708 scope.go:117] "RemoveContainer" containerID="0acb70ddb957b8a9e114d0e8c7558e2447b45e540454c2b0933936cd17f84728"
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.334039 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0"
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.335366 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "eb2fe191-cb57-46a6-9797-c9890640ff74" (UID: "eb2fe191-cb57-46a6-9797-c9890640ff74"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.342388 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "eb2fe191-cb57-46a6-9797-c9890640ff74" (UID: "eb2fe191-cb57-46a6-9797-c9890640ff74"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.344951 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "eb2fe191-cb57-46a6-9797-c9890640ff74" (UID: "eb2fe191-cb57-46a6-9797-c9890640ff74"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.347415 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"866e4edf-2f8a-4c4b-9caf-54ad03011231","Type":"ContainerStarted","Data":"8a6aa26d988bf6ecfdf5df8f5833a7b8e9ab93948eecbfe6a360be795a70a500"}
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.360442 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-kube-api-access-rht9n" (OuterVolumeSpecName: "kube-api-access-rht9n") pod "eb2fe191-cb57-46a6-9797-c9890640ff74" (UID: "eb2fe191-cb57-46a6-9797-c9890640ff74"). InnerVolumeSpecName "kube-api-access-rht9n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.360610 4708 generic.go:334] "Generic (PLEG): container finished" podID="eb2fe191-cb57-46a6-9797-c9890640ff74" containerID="d70b0921760a7a848190d0fade08cdb551a09315fa96f6b5a405260ee0c01cbd" exitCode=0
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.360649 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eb2fe191-cb57-46a6-9797-c9890640ff74","Type":"ContainerDied","Data":"d70b0921760a7a848190d0fade08cdb551a09315fa96f6b5a405260ee0c01cbd"}
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.360687 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.360696 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"eb2fe191-cb57-46a6-9797-c9890640ff74","Type":"ContainerDied","Data":"7e7febe31afdc5a8f3d4c0db2807844a2964205eec12a027c4aadf41c269de15"}
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.361309 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/eb2fe191-cb57-46a6-9797-c9890640ff74-pod-info" (OuterVolumeSpecName: "pod-info") pod "eb2fe191-cb57-46a6-9797-c9890640ff74" (UID: "eb2fe191-cb57-46a6-9797-c9890640ff74"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue ""
Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.367819 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b24d01da-b002-4c89-a426-a8dd80e44135" (OuterVolumeSpecName: "persistence") pod "eb2fe191-cb57-46a6-9797-c9890640ff74" (UID: "eb2fe191-cb57-46a6-9797-c9890640ff74"). InnerVolumeSpecName "pvc-b24d01da-b002-4c89-a426-a8dd80e44135".
PluginName "kubernetes.io/csi", VolumeGidValue "" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.384503 4708 generic.go:334] "Generic (PLEG): container finished" podID="0d761a40-6a7c-4691-a079-919d74122b18" containerID="33b8361e5bf13f90d6c7d3ca6ff9b4201c091d33659bca65cb35685b6ebfa0d6" exitCode=0 Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.384551 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0d761a40-6a7c-4691-a079-919d74122b18","Type":"ContainerDied","Data":"33b8361e5bf13f90d6c7d3ca6ff9b4201c091d33659bca65cb35685b6ebfa0d6"} Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.384579 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"0d761a40-6a7c-4691-a079-919d74122b18","Type":"ContainerDied","Data":"9ece77b3e595b8d0aafade86341b2bedb77c8fe62631fa4102e79cf4a7c39b8d"} Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.384654 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.401821 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.417989 4708 scope.go:117] "RemoveContainer" containerID="d70b0921760a7a848190d0fade08cdb551a09315fa96f6b5a405260ee0c01cbd" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.432575 4708 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/eb2fe191-cb57-46a6-9797-c9890640ff74-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.432646 4708 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-b24d01da-b002-4c89-a426-a8dd80e44135\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b24d01da-b002-4c89-a426-a8dd80e44135\") on node \"crc\" " Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.432659 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rht9n\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-kube-api-access-rht9n\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.432707 4708 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/eb2fe191-cb57-46a6-9797-c9890640ff74-pod-info\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.432715 4708 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.432723 4708 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.432732 4708 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.432740 4708 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-plugins-conf\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.437004 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.454731 4708 scope.go:117] "RemoveContainer" containerID="ea6efe329dc3900ef121cd51e3a92aff2c13514c06bcc2dca88ecfedec053939" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.479800 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-config-data" (OuterVolumeSpecName: "config-data") pod "eb2fe191-cb57-46a6-9797-c9890640ff74" (UID: "eb2fe191-cb57-46a6-9797-c9890640ff74"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.488794 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 27 17:20:27 crc kubenswrapper[4708]: E0227 17:20:27.489999 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="607cf703-5051-4836-92bd-657dbab39bd4" containerName="cloudkitty-proc" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.490019 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="607cf703-5051-4836-92bd-657dbab39bd4" containerName="cloudkitty-proc" Feb 27 17:20:27 crc kubenswrapper[4708]: E0227 17:20:27.490043 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb2fe191-cb57-46a6-9797-c9890640ff74" containerName="setup-container" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.490050 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb2fe191-cb57-46a6-9797-c9890640ff74" containerName="setup-container" Feb 27 17:20:27 crc kubenswrapper[4708]: E0227 17:20:27.490064 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d761a40-6a7c-4691-a079-919d74122b18" containerName="cloudkitty-api" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.490071 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d761a40-6a7c-4691-a079-919d74122b18" containerName="cloudkitty-api" Feb 27 17:20:27 crc kubenswrapper[4708]: E0227 17:20:27.490122 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb2fe191-cb57-46a6-9797-c9890640ff74" containerName="rabbitmq" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.490130 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb2fe191-cb57-46a6-9797-c9890640ff74" containerName="rabbitmq" Feb 27 17:20:27 crc kubenswrapper[4708]: E0227 17:20:27.490143 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d761a40-6a7c-4691-a079-919d74122b18" containerName="cloudkitty-api-log" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.490150 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d761a40-6a7c-4691-a079-919d74122b18" containerName="cloudkitty-api-log" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.490568 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d761a40-6a7c-4691-a079-919d74122b18" containerName="cloudkitty-api-log" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.490606 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="607cf703-5051-4836-92bd-657dbab39bd4" containerName="cloudkitty-proc" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.490623 4708 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0d761a40-6a7c-4691-a079-919d74122b18" containerName="cloudkitty-api" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.490638 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb2fe191-cb57-46a6-9797-c9890640ff74" containerName="rabbitmq" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.542981 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.546245 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.554790 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-scripts" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.556014 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-client-internal" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.556163 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-proc-config-data" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.556278 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-cloudkitty-dockercfg-2sp9f" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.556445 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-config-data" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.546110 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"] Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.556696 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.581246 4708 scope.go:117] "RemoveContainer" containerID="d70b0921760a7a848190d0fade08cdb551a09315fa96f6b5a405260ee0c01cbd" Feb 27 17:20:27 crc kubenswrapper[4708]: E0227 17:20:27.583232 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d70b0921760a7a848190d0fade08cdb551a09315fa96f6b5a405260ee0c01cbd\": container with ID starting with d70b0921760a7a848190d0fade08cdb551a09315fa96f6b5a405260ee0c01cbd not found: ID does not exist" containerID="d70b0921760a7a848190d0fade08cdb551a09315fa96f6b5a405260ee0c01cbd" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.583294 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d70b0921760a7a848190d0fade08cdb551a09315fa96f6b5a405260ee0c01cbd"} err="failed to get container status \"d70b0921760a7a848190d0fade08cdb551a09315fa96f6b5a405260ee0c01cbd\": rpc error: code = NotFound desc = could not find container \"d70b0921760a7a848190d0fade08cdb551a09315fa96f6b5a405260ee0c01cbd\": container with ID starting with d70b0921760a7a848190d0fade08cdb551a09315fa96f6b5a405260ee0c01cbd not found: ID does not exist" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.583321 4708 scope.go:117] "RemoveContainer" containerID="ea6efe329dc3900ef121cd51e3a92aff2c13514c06bcc2dca88ecfedec053939" Feb 27 17:20:27 crc kubenswrapper[4708]: E0227 17:20:27.583673 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea6efe329dc3900ef121cd51e3a92aff2c13514c06bcc2dca88ecfedec053939\": 
container with ID starting with ea6efe329dc3900ef121cd51e3a92aff2c13514c06bcc2dca88ecfedec053939 not found: ID does not exist" containerID="ea6efe329dc3900ef121cd51e3a92aff2c13514c06bcc2dca88ecfedec053939" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.583698 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea6efe329dc3900ef121cd51e3a92aff2c13514c06bcc2dca88ecfedec053939"} err="failed to get container status \"ea6efe329dc3900ef121cd51e3a92aff2c13514c06bcc2dca88ecfedec053939\": rpc error: code = NotFound desc = could not find container \"ea6efe329dc3900ef121cd51e3a92aff2c13514c06bcc2dca88ecfedec053939\": container with ID starting with ea6efe329dc3900ef121cd51e3a92aff2c13514c06bcc2dca88ecfedec053939 not found: ID does not exist" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.583712 4708 scope.go:117] "RemoveContainer" containerID="33b8361e5bf13f90d6c7d3ca6ff9b4201c091d33659bca65cb35685b6ebfa0d6" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.590409 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-server-conf" (OuterVolumeSpecName: "server-conf") pod "eb2fe191-cb57-46a6-9797-c9890640ff74" (UID: "eb2fe191-cb57-46a6-9797-c9890640ff74"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.593042 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.620122 4708 scope.go:117] "RemoveContainer" containerID="88b5fc6bee7f8bed88317544d57f702857c891028d6f5980d0c0cb416af5e36d" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.620645 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cloudkitty-api-0"] Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.622823 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.624766 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-internal-svc" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.625380 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cloudkitty-public-svc" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.626076 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-api-config-data" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.647795 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/caa871a6-96e7-4f11-8769-0fc2464b8f71-certs\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.647928 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa871a6-96e7-4f11-8769-0fc2464b8f71-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.647977 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6f56\" (UniqueName: \"kubernetes.io/projected/caa871a6-96e7-4f11-8769-0fc2464b8f71-kube-api-access-g6f56\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.647998 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/caa871a6-96e7-4f11-8769-0fc2464b8f71-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.648023 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caa871a6-96e7-4f11-8769-0fc2464b8f71-scripts\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.648039 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caa871a6-96e7-4f11-8769-0fc2464b8f71-config-data\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.648094 4708 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/eb2fe191-cb57-46a6-9797-c9890640ff74-server-conf\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.649315 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"] Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.654288 4708 scope.go:117] "RemoveContainer" containerID="33b8361e5bf13f90d6c7d3ca6ff9b4201c091d33659bca65cb35685b6ebfa0d6" Feb 27 17:20:27 crc kubenswrapper[4708]: E0227 17:20:27.654757 4708 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"33b8361e5bf13f90d6c7d3ca6ff9b4201c091d33659bca65cb35685b6ebfa0d6\": container with ID starting with 33b8361e5bf13f90d6c7d3ca6ff9b4201c091d33659bca65cb35685b6ebfa0d6 not found: ID does not exist" containerID="33b8361e5bf13f90d6c7d3ca6ff9b4201c091d33659bca65cb35685b6ebfa0d6" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.654797 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"33b8361e5bf13f90d6c7d3ca6ff9b4201c091d33659bca65cb35685b6ebfa0d6"} err="failed to get container status \"33b8361e5bf13f90d6c7d3ca6ff9b4201c091d33659bca65cb35685b6ebfa0d6\": rpc error: code = NotFound desc = could not find container \"33b8361e5bf13f90d6c7d3ca6ff9b4201c091d33659bca65cb35685b6ebfa0d6\": container with ID starting with 33b8361e5bf13f90d6c7d3ca6ff9b4201c091d33659bca65cb35685b6ebfa0d6 not found: ID does not exist" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.654816 4708 scope.go:117] "RemoveContainer" containerID="88b5fc6bee7f8bed88317544d57f702857c891028d6f5980d0c0cb416af5e36d" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.654876 4708 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... Feb 27 17:20:27 crc kubenswrapper[4708]: E0227 17:20:27.655073 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88b5fc6bee7f8bed88317544d57f702857c891028d6f5980d0c0cb416af5e36d\": container with ID starting with 88b5fc6bee7f8bed88317544d57f702857c891028d6f5980d0c0cb416af5e36d not found: ID does not exist" containerID="88b5fc6bee7f8bed88317544d57f702857c891028d6f5980d0c0cb416af5e36d" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.655098 4708 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-b24d01da-b002-4c89-a426-a8dd80e44135" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b24d01da-b002-4c89-a426-a8dd80e44135") on node "crc" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.655139 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88b5fc6bee7f8bed88317544d57f702857c891028d6f5980d0c0cb416af5e36d"} err="failed to get container status \"88b5fc6bee7f8bed88317544d57f702857c891028d6f5980d0c0cb416af5e36d\": rpc error: code = NotFound desc = could not find container \"88b5fc6bee7f8bed88317544d57f702857c891028d6f5980d0c0cb416af5e36d\": container with ID starting with 88b5fc6bee7f8bed88317544d57f702857c891028d6f5980d0c0cb416af5e36d not found: ID does not exist" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.660762 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "eb2fe191-cb57-46a6-9797-c9890640ff74" (UID: "eb2fe191-cb57-46a6-9797-c9890640ff74"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.703890 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.729899 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.751947 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/74bd4940-0cc6-4cc2-a593-60b6656899cb-certs\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.752041 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa871a6-96e7-4f11-8769-0fc2464b8f71-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.752086 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.752109 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6f56\" (UniqueName: \"kubernetes.io/projected/caa871a6-96e7-4f11-8769-0fc2464b8f71-kube-api-access-g6f56\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.752128 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/caa871a6-96e7-4f11-8769-0fc2464b8f71-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.752153 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caa871a6-96e7-4f11-8769-0fc2464b8f71-scripts\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.752248 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/caa871a6-96e7-4f11-8769-0fc2464b8f71-config-data\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.752313 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.752340 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-scripts\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.752393 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q77x\" (UniqueName: \"kubernetes.io/projected/74bd4940-0cc6-4cc2-a593-60b6656899cb-kube-api-access-5q77x\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.752500 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.752565 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/caa871a6-96e7-4f11-8769-0fc2464b8f71-certs\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.752778 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74bd4940-0cc6-4cc2-a593-60b6656899cb-logs\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.753327 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.753365 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-config-data\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.753530 4708 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/eb2fe191-cb57-46a6-9797-c9890640ff74-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.753545 4708 reconciler_common.go:293] "Volume detached for volume \"pvc-b24d01da-b002-4c89-a426-a8dd80e44135\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b24d01da-b002-4c89-a426-a8dd80e44135\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.758452 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/caa871a6-96e7-4f11-8769-0fc2464b8f71-config-data-custom\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.759432 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/caa871a6-96e7-4f11-8769-0fc2464b8f71-config-data\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.761419 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/caa871a6-96e7-4f11-8769-0fc2464b8f71-certs\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.762331 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/caa871a6-96e7-4f11-8769-0fc2464b8f71-scripts\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.767075 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/caa871a6-96e7-4f11-8769-0fc2464b8f71-combined-ca-bundle\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.771451 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6f56\" (UniqueName: \"kubernetes.io/projected/caa871a6-96e7-4f11-8769-0fc2464b8f71-kube-api-access-g6f56\") pod \"cloudkitty-proc-0\" (UID: \"caa871a6-96e7-4f11-8769-0fc2464b8f71\") " pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.773735 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.775389 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.778066 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.778313 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.778891 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.779005 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.779107 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.779776 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.780087 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-9zx8k" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.800592 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.856331 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.856375 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-scripts\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.856403 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q77x\" (UniqueName: \"kubernetes.io/projected/74bd4940-0cc6-4cc2-a593-60b6656899cb-kube-api-access-5q77x\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.856430 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.856485 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74bd4940-0cc6-4cc2-a593-60b6656899cb-logs\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.856502 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: 
\"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.856527 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-config-data\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.856553 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/projected/74bd4940-0cc6-4cc2-a593-60b6656899cb-certs\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.856641 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.857943 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/74bd4940-0cc6-4cc2-a593-60b6656899cb-logs\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.859689 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-public-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.860152 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-config-data-custom\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.860308 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-internal-tls-certs\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.860621 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/projected/74bd4940-0cc6-4cc2-a593-60b6656899cb-certs\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.862650 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-config-data\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.861841 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-combined-ca-bundle\") pod \"cloudkitty-api-0\" (UID: 
\"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.863240 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/74bd4940-0cc6-4cc2-a593-60b6656899cb-scripts\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.883392 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q77x\" (UniqueName: \"kubernetes.io/projected/74bd4940-0cc6-4cc2-a593-60b6656899cb-kube-api-access-5q77x\") pod \"cloudkitty-api-0\" (UID: \"74bd4940-0cc6-4cc2-a593-60b6656899cb\") " pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.893636 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-proc-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.956699 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cloudkitty-api-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.958103 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.958129 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b24d01da-b002-4c89-a426-a8dd80e44135\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b24d01da-b002-4c89-a426-a8dd80e44135\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.958149 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.958173 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.958198 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsfrv\" (UniqueName: \"kubernetes.io/projected/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-kube-api-access-hsfrv\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.958306 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.958373 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.958522 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.958953 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.959066 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:27 crc kubenswrapper[4708]: I0227 17:20:27.959213 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.062093 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.062358 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsfrv\" (UniqueName: \"kubernetes.io/projected/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-kube-api-access-hsfrv\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.062397 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.062426 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " 
pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.062460 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.062551 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.062576 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.062603 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.062634 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.062652 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-b24d01da-b002-4c89-a426-a8dd80e44135\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b24d01da-b002-4c89-a426-a8dd80e44135\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.062674 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.062883 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.063582 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.063597 
4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.064489 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.064588 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.067120 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.067821 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.067931 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.070308 4708 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice...
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.070333 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-b24d01da-b002-4c89-a426-a8dd80e44135\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b24d01da-b002-4c89-a426-a8dd80e44135\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/9f6ff909c36baed36fbb0de76c440cc5ed218f0a068c651800017aff83661890/globalmount\"" pod="openstack/rabbitmq-cell1-server-0"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.074234 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.080999 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hsfrv\" (UniqueName: \"kubernetes.io/projected/7ac4a3d3-0b3a-4fc5-8f98-806ca5810475-kube-api-access-hsfrv\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.129446 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-b24d01da-b002-4c89-a426-a8dd80e44135\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b24d01da-b002-4c89-a426-a8dd80e44135\") pod \"rabbitmq-cell1-server-0\" (UID: \"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475\") " pod="openstack/rabbitmq-cell1-server-0"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.238807 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d761a40-6a7c-4691-a079-919d74122b18" path="/var/lib/kubelet/pods/0d761a40-6a7c-4691-a079-919d74122b18/volumes"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.239802 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="607cf703-5051-4836-92bd-657dbab39bd4" path="/var/lib/kubelet/pods/607cf703-5051-4836-92bd-657dbab39bd4/volumes"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.240523 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb2fe191-cb57-46a6-9797-c9890640ff74" path="/var/lib/kubelet/pods/eb2fe191-cb57-46a6-9797-c9890640ff74/volumes"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.344235 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.415343 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"c1ce78ce-446b-4a42-bd4f-59fe2264e7c2","Type":"ContainerStarted","Data":"5b30efe86013d257af5d8d7083537b7a0a6e9cbe82c21352c30d3108bd0e9d4e"}
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.415504 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.445276 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.412174594 podStartE2EDuration="9.445256584s" podCreationTimestamp="2026-02-27 17:20:19 +0000 UTC" firstStartedPulling="2026-02-27 17:20:20.082714962 +0000 UTC m=+1618.598512539" lastFinishedPulling="2026-02-27 17:20:27.115796942 +0000 UTC m=+1625.631594529" observedRunningTime="2026-02-27 17:20:28.437317791 +0000 UTC m=+1626.953115378" watchObservedRunningTime="2026-02-27 17:20:28.445256584 +0000 UTC m=+1626.961054171"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.476803 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-proc-0"]
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.613094 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cloudkitty-api-0"]
Feb 27 17:20:28 crc kubenswrapper[4708]: W0227 17:20:28.613862 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod74bd4940_0cc6_4cc2_a593_60b6656899cb.slice/crio-2e8492d2e0d58091a231addef9f04c6a9dff2fe1bf6e66e4b1975128787dcbc8 WatchSource:0}: Error finding container 2e8492d2e0d58091a231addef9f04c6a9dff2fe1bf6e66e4b1975128787dcbc8: Status 404 returned error can't find the container with id 2e8492d2e0d58091a231addef9f04c6a9dff2fe1bf6e66e4b1975128787dcbc8
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.860103 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-qv2xx"]
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.862517 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.866822 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.882028 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-qv2xx"]
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.998015 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.998335 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.998412 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-config\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.999068 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8hbt\" (UniqueName: \"kubernetes.io/projected/2d55b898-0eab-4666-acca-9711909e4dcf-kube-api-access-q8hbt\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.999257 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.999367 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:28 crc kubenswrapper[4708]: I0227 17:20:28.999493 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.026763 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.101518 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.101589 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.101655 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-config\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.101698 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8hbt\" (UniqueName: \"kubernetes.io/projected/2d55b898-0eab-4666-acca-9711909e4dcf-kube-api-access-q8hbt\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.101770 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.101807 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.101917 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.102548 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-ovsdbserver-nb\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.102627 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-config\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.102718 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-dns-swift-storage-0\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.102790 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-openstack-edpm-ipam\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.103112 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-ovsdbserver-sb\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.103495 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-dns-svc\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.116692 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8hbt\" (UniqueName: \"kubernetes.io/projected/2d55b898-0eab-4666-acca-9711909e4dcf-kube-api-access-q8hbt\") pod \"dnsmasq-dns-dbb88bf8c-qv2xx\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") " pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.180433 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.392835 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cdvx5"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.470816 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"caa871a6-96e7-4f11-8769-0fc2464b8f71","Type":"ContainerStarted","Data":"2d1099f6742410f6bf3c3bdd4cfc9a67b532289f2cc892a072a33028bc0094d0"}
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.470887 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-proc-0" event={"ID":"caa871a6-96e7-4f11-8769-0fc2464b8f71","Type":"ContainerStarted","Data":"514541e68931342166a63e13c716ac69937f1d0ef7f85fa945e5088ec4a927a6"}
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.481695 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475","Type":"ContainerStarted","Data":"f9e1dbd0342505a6c21885fd877af64375d5a8cb3ccca6494fdab1aa11bfd01a"}
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.499267 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cdvx5"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.509355 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-proc-0" podStartSLOduration=2.354059227 podStartE2EDuration="2.50933055s" podCreationTimestamp="2026-02-27 17:20:27 +0000 UTC" firstStartedPulling="2026-02-27 17:20:28.467952101 +0000 UTC m=+1626.983749688" lastFinishedPulling="2026-02-27 17:20:28.623223424 +0000 UTC m=+1627.139021011" observedRunningTime="2026-02-27 17:20:29.489554184 +0000 UTC m=+1628.005351771" watchObservedRunningTime="2026-02-27 17:20:29.50933055 +0000 UTC m=+1628.025128127"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.511636 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"74bd4940-0cc6-4cc2-a593-60b6656899cb","Type":"ContainerStarted","Data":"15dbf1372a13fdd00f7ad95e495e68e1ccd0c6397fa11b7ac974089b1f54790d"}
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.511679 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"74bd4940-0cc6-4cc2-a593-60b6656899cb","Type":"ContainerStarted","Data":"b9553f5be1eb905f1d27ee4b78bd12c030b4e96f71183e059585e873a0b22719"}
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.511688 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cloudkitty-api-0" event={"ID":"74bd4940-0cc6-4cc2-a593-60b6656899cb","Type":"ContainerStarted","Data":"2e8492d2e0d58091a231addef9f04c6a9dff2fe1bf6e66e4b1975128787dcbc8"}
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.512366 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cloudkitty-api-0"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.520933 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"866e4edf-2f8a-4c4b-9caf-54ad03011231","Type":"ContainerStarted","Data":"a9962c1a56f309ba494497fb07edad9ca12120c4e0eaac29933c36d62c9eae37"}
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.555814 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cloudkitty-api-0" podStartSLOduration=2.555797456 podStartE2EDuration="2.555797456s" podCreationTimestamp="2026-02-27 17:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:20:29.547226455 +0000 UTC m=+1628.063024042" watchObservedRunningTime="2026-02-27 17:20:29.555797456 +0000 UTC m=+1628.071595043"
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.635032 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdvx5"]
Feb 27 17:20:29 crc kubenswrapper[4708]: I0227 17:20:29.773887 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-qv2xx"]
Feb 27 17:20:30 crc kubenswrapper[4708]: I0227 17:20:30.530759 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cdvx5" podUID="3856bd24-a61f-4c56-bfe9-5734964010fc" containerName="registry-server" containerID="cri-o://9d03515aa0b81d2e00065ab32e7cdcf406578e600d56946b5383e68067edbd1e" gracePeriod=2
Feb 27 17:20:30 crc kubenswrapper[4708]: I0227 17:20:30.532126 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx" event={"ID":"2d55b898-0eab-4666-acca-9711909e4dcf","Type":"ContainerStarted","Data":"dd557253ec4f9f29bfb1552ba1dcd1c376aaa8ffa92d7c367b79ed9f009eb1fb"}
Feb 27 17:20:31 crc kubenswrapper[4708]: E0227 17:20:31.170718 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d55b898_0eab_4666_acca_9711909e4dcf.slice/crio-f11c11793b7de1a0aaac564408971c9a7248f86c2d64fa1f4343dadfa33145cd.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d55b898_0eab_4666_acca_9711909e4dcf.slice/crio-conmon-f11c11793b7de1a0aaac564408971c9a7248f86c2d64fa1f4343dadfa33145cd.scope\": RecentStats: unable to find data in memory cache]"
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.477677 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdvx5"
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.545525 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475","Type":"ContainerStarted","Data":"0deb181a3e7f1f01b06d01da02af9c8df1b1896a2c74f529570ed746f55a4e5c"}
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.556385 4708 generic.go:334] "Generic (PLEG): container finished" podID="3856bd24-a61f-4c56-bfe9-5734964010fc" containerID="9d03515aa0b81d2e00065ab32e7cdcf406578e600d56946b5383e68067edbd1e" exitCode=0
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.556453 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdvx5" event={"ID":"3856bd24-a61f-4c56-bfe9-5734964010fc","Type":"ContainerDied","Data":"9d03515aa0b81d2e00065ab32e7cdcf406578e600d56946b5383e68067edbd1e"}
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.556482 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cdvx5" event={"ID":"3856bd24-a61f-4c56-bfe9-5734964010fc","Type":"ContainerDied","Data":"ee85f79cadd6d17eac940346094aa8733f0ddd7542bfd30c7be911aec655720d"}
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.556498 4708 scope.go:117] "RemoveContainer" containerID="9d03515aa0b81d2e00065ab32e7cdcf406578e600d56946b5383e68067edbd1e"
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.556632 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cdvx5"
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.563173 4708 generic.go:334] "Generic (PLEG): container finished" podID="2d55b898-0eab-4666-acca-9711909e4dcf" containerID="f11c11793b7de1a0aaac564408971c9a7248f86c2d64fa1f4343dadfa33145cd" exitCode=0
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.563218 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx" event={"ID":"2d55b898-0eab-4666-acca-9711909e4dcf","Type":"ContainerDied","Data":"f11c11793b7de1a0aaac564408971c9a7248f86c2d64fa1f4343dadfa33145cd"}
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.566584 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3856bd24-a61f-4c56-bfe9-5734964010fc-catalog-content\") pod \"3856bd24-a61f-4c56-bfe9-5734964010fc\" (UID: \"3856bd24-a61f-4c56-bfe9-5734964010fc\") "
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.566635 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3856bd24-a61f-4c56-bfe9-5734964010fc-utilities\") pod \"3856bd24-a61f-4c56-bfe9-5734964010fc\" (UID: \"3856bd24-a61f-4c56-bfe9-5734964010fc\") "
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.566667 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjhmn\" (UniqueName: \"kubernetes.io/projected/3856bd24-a61f-4c56-bfe9-5734964010fc-kube-api-access-sjhmn\") pod \"3856bd24-a61f-4c56-bfe9-5734964010fc\" (UID: \"3856bd24-a61f-4c56-bfe9-5734964010fc\") "
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.571040 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3856bd24-a61f-4c56-bfe9-5734964010fc-utilities" (OuterVolumeSpecName: "utilities") pod "3856bd24-a61f-4c56-bfe9-5734964010fc" (UID: "3856bd24-a61f-4c56-bfe9-5734964010fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.574508 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3856bd24-a61f-4c56-bfe9-5734964010fc-kube-api-access-sjhmn" (OuterVolumeSpecName: "kube-api-access-sjhmn") pod "3856bd24-a61f-4c56-bfe9-5734964010fc" (UID: "3856bd24-a61f-4c56-bfe9-5734964010fc"). InnerVolumeSpecName "kube-api-access-sjhmn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.589479 4708 scope.go:117] "RemoveContainer" containerID="90c8d5727b2e2ca449d509b8be27e98c30c5c64bc197a47d7ed594d699b5fdd2"
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.671542 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3856bd24-a61f-4c56-bfe9-5734964010fc-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.671583 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjhmn\" (UniqueName: \"kubernetes.io/projected/3856bd24-a61f-4c56-bfe9-5734964010fc-kube-api-access-sjhmn\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.688055 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3856bd24-a61f-4c56-bfe9-5734964010fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3856bd24-a61f-4c56-bfe9-5734964010fc" (UID: "3856bd24-a61f-4c56-bfe9-5734964010fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.775915 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3856bd24-a61f-4c56-bfe9-5734964010fc-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.808875 4708 scope.go:117] "RemoveContainer" containerID="6c6605280f30844228330065fcc327d913e061cd9d9f0c149e28bed0ab820ec6"
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.841660 4708 scope.go:117] "RemoveContainer" containerID="9d03515aa0b81d2e00065ab32e7cdcf406578e600d56946b5383e68067edbd1e"
Feb 27 17:20:31 crc kubenswrapper[4708]: E0227 17:20:31.842249 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d03515aa0b81d2e00065ab32e7cdcf406578e600d56946b5383e68067edbd1e\": container with ID starting with 9d03515aa0b81d2e00065ab32e7cdcf406578e600d56946b5383e68067edbd1e not found: ID does not exist" containerID="9d03515aa0b81d2e00065ab32e7cdcf406578e600d56946b5383e68067edbd1e"
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.842285 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d03515aa0b81d2e00065ab32e7cdcf406578e600d56946b5383e68067edbd1e"} err="failed to get container status \"9d03515aa0b81d2e00065ab32e7cdcf406578e600d56946b5383e68067edbd1e\": rpc error: code = NotFound desc = could not find container \"9d03515aa0b81d2e00065ab32e7cdcf406578e600d56946b5383e68067edbd1e\": container with ID starting with 9d03515aa0b81d2e00065ab32e7cdcf406578e600d56946b5383e68067edbd1e not found: ID does not exist"
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.842315 4708 scope.go:117] "RemoveContainer" containerID="90c8d5727b2e2ca449d509b8be27e98c30c5c64bc197a47d7ed594d699b5fdd2"
Feb 27 17:20:31 crc kubenswrapper[4708]: E0227 17:20:31.842598 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90c8d5727b2e2ca449d509b8be27e98c30c5c64bc197a47d7ed594d699b5fdd2\": container with ID starting with 90c8d5727b2e2ca449d509b8be27e98c30c5c64bc197a47d7ed594d699b5fdd2 not found: ID does not exist" containerID="90c8d5727b2e2ca449d509b8be27e98c30c5c64bc197a47d7ed594d699b5fdd2"
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.842628 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90c8d5727b2e2ca449d509b8be27e98c30c5c64bc197a47d7ed594d699b5fdd2"} err="failed to get container status \"90c8d5727b2e2ca449d509b8be27e98c30c5c64bc197a47d7ed594d699b5fdd2\": rpc error: code = NotFound desc = could not find container \"90c8d5727b2e2ca449d509b8be27e98c30c5c64bc197a47d7ed594d699b5fdd2\": container with ID starting with 90c8d5727b2e2ca449d509b8be27e98c30c5c64bc197a47d7ed594d699b5fdd2 not found: ID does not exist"
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.842651 4708 scope.go:117] "RemoveContainer" containerID="6c6605280f30844228330065fcc327d913e061cd9d9f0c149e28bed0ab820ec6"
Feb 27 17:20:31 crc kubenswrapper[4708]: E0227 17:20:31.842966 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6c6605280f30844228330065fcc327d913e061cd9d9f0c149e28bed0ab820ec6\": container with ID starting with 6c6605280f30844228330065fcc327d913e061cd9d9f0c149e28bed0ab820ec6 not found: ID does not exist" containerID="6c6605280f30844228330065fcc327d913e061cd9d9f0c149e28bed0ab820ec6"
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.843007 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6c6605280f30844228330065fcc327d913e061cd9d9f0c149e28bed0ab820ec6"} err="failed to get container status \"6c6605280f30844228330065fcc327d913e061cd9d9f0c149e28bed0ab820ec6\": rpc error: code = NotFound desc = could not find container \"6c6605280f30844228330065fcc327d913e061cd9d9f0c149e28bed0ab820ec6\": container with ID starting with 6c6605280f30844228330065fcc327d913e061cd9d9f0c149e28bed0ab820ec6 not found: ID does not exist"
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.924673 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cdvx5"]
Feb 27 17:20:31 crc kubenswrapper[4708]: I0227 17:20:31.942079 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cdvx5"]
Feb 27 17:20:32 crc kubenswrapper[4708]: I0227 17:20:32.241551 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3856bd24-a61f-4c56-bfe9-5734964010fc" path="/var/lib/kubelet/pods/3856bd24-a61f-4c56-bfe9-5734964010fc/volumes"
Feb 27 17:20:32 crc kubenswrapper[4708]: I0227 17:20:32.578360 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx" event={"ID":"2d55b898-0eab-4666-acca-9711909e4dcf","Type":"ContainerStarted","Data":"b386c202488e4cfdfa35f3802328331b8e94ab69bc74f3ddae7cb23b4e64bd7e"}
Feb 27 17:20:32 crc kubenswrapper[4708]: I0227 17:20:32.578671 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:32 crc kubenswrapper[4708]: I0227 17:20:32.602696 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx" podStartSLOduration=4.60267419 podStartE2EDuration="4.60267419s" podCreationTimestamp="2026-02-27 17:20:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:20:32.592649268 +0000 UTC m=+1631.108446855" watchObservedRunningTime="2026-02-27 17:20:32.60267419 +0000 UTC m=+1631.118471787"
Feb 27 17:20:35 crc kubenswrapper[4708]: I0227 17:20:35.229049 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e"
Feb 27 17:20:35 crc kubenswrapper[4708]: E0227 17:20:35.229662 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.776593 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-tq8mz"]
Feb 27 17:20:38 crc kubenswrapper[4708]: E0227 17:20:38.777347 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856bd24-a61f-4c56-bfe9-5734964010fc" containerName="extract-utilities"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.777366 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856bd24-a61f-4c56-bfe9-5734964010fc" containerName="extract-utilities"
Feb 27 17:20:38 crc kubenswrapper[4708]: E0227 17:20:38.777388 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856bd24-a61f-4c56-bfe9-5734964010fc" containerName="registry-server"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.777396 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856bd24-a61f-4c56-bfe9-5734964010fc" containerName="registry-server"
Feb 27 17:20:38 crc kubenswrapper[4708]: E0227 17:20:38.777444 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3856bd24-a61f-4c56-bfe9-5734964010fc" containerName="extract-content"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.777453 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="3856bd24-a61f-4c56-bfe9-5734964010fc" containerName="extract-content"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.777714 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="3856bd24-a61f-4c56-bfe9-5734964010fc" containerName="registry-server"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.779758 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq8mz"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.804077 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq8mz"]
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.828092 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927d4daf-45f7-48a8-9e25-a47aae1be192-catalog-content\") pod \"certified-operators-tq8mz\" (UID: \"927d4daf-45f7-48a8-9e25-a47aae1be192\") " pod="openshift-marketplace/certified-operators-tq8mz"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.828408 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927d4daf-45f7-48a8-9e25-a47aae1be192-utilities\") pod \"certified-operators-tq8mz\" (UID: \"927d4daf-45f7-48a8-9e25-a47aae1be192\") " pod="openshift-marketplace/certified-operators-tq8mz"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.828663 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gfjz\" (UniqueName: \"kubernetes.io/projected/927d4daf-45f7-48a8-9e25-a47aae1be192-kube-api-access-6gfjz\") pod \"certified-operators-tq8mz\" (UID: \"927d4daf-45f7-48a8-9e25-a47aae1be192\") " pod="openshift-marketplace/certified-operators-tq8mz"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.930963 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927d4daf-45f7-48a8-9e25-a47aae1be192-catalog-content\") pod \"certified-operators-tq8mz\" (UID: \"927d4daf-45f7-48a8-9e25-a47aae1be192\") " pod="openshift-marketplace/certified-operators-tq8mz"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.931071 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927d4daf-45f7-48a8-9e25-a47aae1be192-utilities\") pod \"certified-operators-tq8mz\" (UID: \"927d4daf-45f7-48a8-9e25-a47aae1be192\") " pod="openshift-marketplace/certified-operators-tq8mz"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.931157 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6gfjz\" (UniqueName: \"kubernetes.io/projected/927d4daf-45f7-48a8-9e25-a47aae1be192-kube-api-access-6gfjz\") pod \"certified-operators-tq8mz\" (UID: \"927d4daf-45f7-48a8-9e25-a47aae1be192\") " pod="openshift-marketplace/certified-operators-tq8mz"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.931566 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/927d4daf-45f7-48a8-9e25-a47aae1be192-catalog-content\") pod \"certified-operators-tq8mz\" (UID: \"927d4daf-45f7-48a8-9e25-a47aae1be192\") " pod="openshift-marketplace/certified-operators-tq8mz"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.931638 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/927d4daf-45f7-48a8-9e25-a47aae1be192-utilities\") pod \"certified-operators-tq8mz\" (UID: \"927d4daf-45f7-48a8-9e25-a47aae1be192\") " pod="openshift-marketplace/certified-operators-tq8mz"
Feb 27 17:20:38 crc kubenswrapper[4708]: I0227 17:20:38.950981 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gfjz\" (UniqueName: \"kubernetes.io/projected/927d4daf-45f7-48a8-9e25-a47aae1be192-kube-api-access-6gfjz\") pod \"certified-operators-tq8mz\" (UID: \"927d4daf-45f7-48a8-9e25-a47aae1be192\") " pod="openshift-marketplace/certified-operators-tq8mz"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.100945 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-tq8mz"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.182155 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.268597 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-cvtwx"]
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.269055 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" podUID="52953be0-5d65-4612-999f-0c6740c4909b" containerName="dnsmasq-dns" containerID="cri-o://6a6b4f087880920a43b080acf9aca6912b7e056558aac6b19bb7deb3e9206bf5" gracePeriod=10
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.429208 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-csc9f"]
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.430960 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.453501 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kntw\" (UniqueName: \"kubernetes.io/projected/c1565c10-ac46-4e06-aaef-7eafc155b4cd-kube-api-access-7kntw\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.453544 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-dns-svc\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.453586 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.453619 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.453668 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-config\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.453686 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.453750 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.471189 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-csc9f"]
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.556636 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kntw\" (UniqueName: \"kubernetes.io/projected/c1565c10-ac46-4e06-aaef-7eafc155b4cd-kube-api-access-7kntw\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.556683 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-dns-svc\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.556727 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.556758 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.556809 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-config\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.556827 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.556906 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.557698 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-ovsdbserver-nb\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.560073 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-dns-svc\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.560069 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-ovsdbserver-sb\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.560509 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-config\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.561214 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-dns-swift-storage-0\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.562104 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/c1565c10-ac46-4e06-aaef-7eafc155b4cd-openstack-edpm-ipam\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.582080 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kntw\" (UniqueName: \"kubernetes.io/projected/c1565c10-ac46-4e06-aaef-7eafc155b4cd-kube-api-access-7kntw\") pod \"dnsmasq-dns-85f64749dc-csc9f\" (UID: \"c1565c10-ac46-4e06-aaef-7eafc155b4cd\") " pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.666375 4708 generic.go:334] "Generic (PLEG): container finished" podID="52953be0-5d65-4612-999f-0c6740c4909b" containerID="6a6b4f087880920a43b080acf9aca6912b7e056558aac6b19bb7deb3e9206bf5" exitCode=0
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.666439 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" event={"ID":"52953be0-5d65-4612-999f-0c6740c4909b","Type":"ContainerDied","Data":"6a6b4f087880920a43b080acf9aca6912b7e056558aac6b19bb7deb3e9206bf5"}
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.689149 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq8mz"]
Feb 27 17:20:39 crc kubenswrapper[4708]: I0227 17:20:39.775230 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.036467 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx"
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.173551 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-ovsdbserver-nb\") pod \"52953be0-5d65-4612-999f-0c6740c4909b\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") "
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.173647 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-config\") pod \"52953be0-5d65-4612-999f-0c6740c4909b\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") "
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.173727 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-dns-swift-storage-0\") pod \"52953be0-5d65-4612-999f-0c6740c4909b\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") "
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.173765 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqzxq\" (UniqueName: \"kubernetes.io/projected/52953be0-5d65-4612-999f-0c6740c4909b-kube-api-access-fqzxq\") pod \"52953be0-5d65-4612-999f-0c6740c4909b\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") "
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.173813 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-ovsdbserver-sb\") pod \"52953be0-5d65-4612-999f-0c6740c4909b\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") "
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.173858 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-dns-svc\") pod \"52953be0-5d65-4612-999f-0c6740c4909b\" (UID: \"52953be0-5d65-4612-999f-0c6740c4909b\") "
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.184774 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/52953be0-5d65-4612-999f-0c6740c4909b-kube-api-access-fqzxq" (OuterVolumeSpecName: "kube-api-access-fqzxq") pod "52953be0-5d65-4612-999f-0c6740c4909b" (UID: "52953be0-5d65-4612-999f-0c6740c4909b"). InnerVolumeSpecName "kube-api-access-fqzxq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.243710 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "52953be0-5d65-4612-999f-0c6740c4909b" (UID: "52953be0-5d65-4612-999f-0c6740c4909b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.264254 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-config" (OuterVolumeSpecName: "config") pod "52953be0-5d65-4612-999f-0c6740c4909b" (UID: "52953be0-5d65-4612-999f-0c6740c4909b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.274340 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "52953be0-5d65-4612-999f-0c6740c4909b" (UID: "52953be0-5d65-4612-999f-0c6740c4909b"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.276796 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-dns-svc\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.276945 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-config\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.277023 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqzxq\" (UniqueName: \"kubernetes.io/projected/52953be0-5d65-4612-999f-0c6740c4909b-kube-api-access-fqzxq\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.277083 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.292214 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "52953be0-5d65-4612-999f-0c6740c4909b" (UID: "52953be0-5d65-4612-999f-0c6740c4909b"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.349292 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "52953be0-5d65-4612-999f-0c6740c4909b" (UID: "52953be0-5d65-4612-999f-0c6740c4909b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.379420 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.379447 4708 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/52953be0-5d65-4612-999f-0c6740c4909b-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.385700 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-85f64749dc-csc9f"]
Feb 27 17:20:40 crc kubenswrapper[4708]: W0227 17:20:40.386104 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc1565c10_ac46_4e06_aaef_7eafc155b4cd.slice/crio-ba6370d80a2a885b4e7b3d9657d161a6b36964d7abafba4c37d62269e4ffc936 WatchSource:0}: Error finding container ba6370d80a2a885b4e7b3d9657d161a6b36964d7abafba4c37d62269e4ffc936: Status 404 returned error can't find the container with id ba6370d80a2a885b4e7b3d9657d161a6b36964d7abafba4c37d62269e4ffc936
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.676276 4708 generic.go:334] "Generic (PLEG): container finished" podID="927d4daf-45f7-48a8-9e25-a47aae1be192" containerID="7a1db5e5e3030381387f7f9e5be363ff7a890e719d164dec85759cb2fb5b8c65" exitCode=0
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.676370 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq8mz" event={"ID":"927d4daf-45f7-48a8-9e25-a47aae1be192","Type":"ContainerDied","Data":"7a1db5e5e3030381387f7f9e5be363ff7a890e719d164dec85759cb2fb5b8c65"}
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.676414 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq8mz" event={"ID":"927d4daf-45f7-48a8-9e25-a47aae1be192","Type":"ContainerStarted","Data":"f25e010bc36f68464eee83c0f92486806a1abf705735e1b95411b93cfc384506"}
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.679358 4708 generic.go:334] "Generic (PLEG): container finished" podID="c1565c10-ac46-4e06-aaef-7eafc155b4cd" containerID="6855c18ec950daad3031a65405d6e4d7b56ffaefe2f66f9192046c9f60883eb6" exitCode=0
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.679506 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-csc9f" event={"ID":"c1565c10-ac46-4e06-aaef-7eafc155b4cd","Type":"ContainerDied","Data":"6855c18ec950daad3031a65405d6e4d7b56ffaefe2f66f9192046c9f60883eb6"}
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.679558 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-csc9f" event={"ID":"c1565c10-ac46-4e06-aaef-7eafc155b4cd","Type":"ContainerStarted","Data":"ba6370d80a2a885b4e7b3d9657d161a6b36964d7abafba4c37d62269e4ffc936"}
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.682842 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx" event={"ID":"52953be0-5d65-4612-999f-0c6740c4909b","Type":"ContainerDied","Data":"1b557e5aa8d8d09c3d1586aa4845cef96026dc8203342f41b786ed69590aa3f4"}
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.682923 4708 scope.go:117] "RemoveContainer" containerID="6a6b4f087880920a43b080acf9aca6912b7e056558aac6b19bb7deb3e9206bf5"
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.683074 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5fd9b586ff-cvtwx"
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.715032 4708 scope.go:117] "RemoveContainer" containerID="9c310302045200ba2d4bbb4242ab6731de7f7320aa2d17b62909fbff28e0c472"
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.751309 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-cvtwx"]
Feb 27 17:20:40 crc kubenswrapper[4708]: I0227 17:20:40.759663 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5fd9b586ff-cvtwx"]
Feb 27 17:20:41 crc kubenswrapper[4708]: I0227 17:20:41.702359 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-85f64749dc-csc9f" event={"ID":"c1565c10-ac46-4e06-aaef-7eafc155b4cd","Type":"ContainerStarted","Data":"ac6f03c3f8f6410b76c380b0122fe2a0aaf7598b0e8a3773ff6c2c484e281121"}
Feb 27 17:20:41 crc kubenswrapper[4708]: I0227 17:20:41.702701 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:41 crc kubenswrapper[4708]: I0227 17:20:41.732748 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-85f64749dc-csc9f" podStartSLOduration=2.732720156 podStartE2EDuration="2.732720156s" podCreationTimestamp="2026-02-27 17:20:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:20:41.721514951 +0000 UTC m=+1640.237312558" watchObservedRunningTime="2026-02-27 17:20:41.732720156 +0000 UTC m=+1640.248517753"
Feb 27 17:20:42 crc kubenswrapper[4708]: I0227 17:20:42.242218 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="52953be0-5d65-4612-999f-0c6740c4909b" path="/var/lib/kubelet/pods/52953be0-5d65-4612-999f-0c6740c4909b/volumes"
Feb 27 17:20:47 crc kubenswrapper[4708]: I0227 17:20:47.784702 4708 generic.go:334] "Generic (PLEG): container finished" podID="927d4daf-45f7-48a8-9e25-a47aae1be192" containerID="73598b00a6ca7ae3fe53068d632db342bae521ec77c5b852c8c8b043808b3bb1" exitCode=0
Feb 27 17:20:47 crc kubenswrapper[4708]: I0227 17:20:47.784804 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq8mz" event={"ID":"927d4daf-45f7-48a8-9e25-a47aae1be192","Type":"ContainerDied","Data":"73598b00a6ca7ae3fe53068d632db342bae521ec77c5b852c8c8b043808b3bb1"}
Feb 27 17:20:48 crc kubenswrapper[4708]: I0227 17:20:48.799227 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-tq8mz" event={"ID":"927d4daf-45f7-48a8-9e25-a47aae1be192","Type":"ContainerStarted","Data":"52885af39a33deb8dca5e97ea7f1336a7e681f69a8f1cebd1dba7ed97126a6aa"}
Feb 27 17:20:48 crc kubenswrapper[4708]: I0227 17:20:48.835155 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-tq8mz" podStartSLOduration=3.276068777 podStartE2EDuration="10.835127286s" podCreationTimestamp="2026-02-27 17:20:38 +0000 UTC" firstStartedPulling="2026-02-27 17:20:40.679151876 +0000 UTC m=+1639.194949483" lastFinishedPulling="2026-02-27 17:20:48.238210375 +0000 UTC m=+1646.754007992" observedRunningTime="2026-02-27 17:20:48.824291191 +0000 UTC m=+1647.340088808" watchObservedRunningTime="2026-02-27 17:20:48.835127286 +0000 UTC m=+1647.350924913"
Feb 27 17:20:49 crc kubenswrapper[4708]: I0227 17:20:49.101919 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-tq8mz"
Feb 27 17:20:49 crc kubenswrapper[4708]: I0227 17:20:49.101984 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-tq8mz"
Feb 27 17:20:49 crc kubenswrapper[4708]: I0227 17:20:49.228242 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e"
Feb 27 17:20:49 crc kubenswrapper[4708]: E0227 17:20:49.228516 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 17:20:49 crc kubenswrapper[4708]: I0227 17:20:49.568735 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Feb 27 17:20:49 crc kubenswrapper[4708]: I0227 17:20:49.777353 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-85f64749dc-csc9f"
Feb 27 17:20:49 crc kubenswrapper[4708]: I0227 17:20:49.851399 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-qv2xx"]
Feb 27 17:20:49 crc kubenswrapper[4708]: I0227 17:20:49.851624 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx" podUID="2d55b898-0eab-4666-acca-9711909e4dcf" containerName="dnsmasq-dns" containerID="cri-o://b386c202488e4cfdfa35f3802328331b8e94ab69bc74f3ddae7cb23b4e64bd7e" gracePeriod=10
Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.213409 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-tq8mz" podUID="927d4daf-45f7-48a8-9e25-a47aae1be192" containerName="registry-server" probeResult="failure" output=<
Feb 27 17:20:50 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s
Feb 27 17:20:50 crc kubenswrapper[4708]: >
Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.503387 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx"
Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.608487 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-config\") pod \"2d55b898-0eab-4666-acca-9711909e4dcf\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") "
Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.608774 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-openstack-edpm-ipam\") pod \"2d55b898-0eab-4666-acca-9711909e4dcf\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") "
Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.609386 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-dns-swift-storage-0\") pod \"2d55b898-0eab-4666-acca-9711909e4dcf\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") "
Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.609459 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-ovsdbserver-sb\") pod \"2d55b898-0eab-4666-acca-9711909e4dcf\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") "
Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.609492 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-dns-svc\") pod \"2d55b898-0eab-4666-acca-9711909e4dcf\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") "
Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.609536 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8hbt\" (UniqueName: \"kubernetes.io/projected/2d55b898-0eab-4666-acca-9711909e4dcf-kube-api-access-q8hbt\") pod \"2d55b898-0eab-4666-acca-9711909e4dcf\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") "
Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.609652 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-ovsdbserver-nb\") pod \"2d55b898-0eab-4666-acca-9711909e4dcf\" (UID: \"2d55b898-0eab-4666-acca-9711909e4dcf\") "
Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.642081 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d55b898-0eab-4666-acca-9711909e4dcf-kube-api-access-q8hbt" (OuterVolumeSpecName: "kube-api-access-q8hbt") pod "2d55b898-0eab-4666-acca-9711909e4dcf" (UID: "2d55b898-0eab-4666-acca-9711909e4dcf"). InnerVolumeSpecName "kube-api-access-q8hbt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.686452 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2d55b898-0eab-4666-acca-9711909e4dcf" (UID: "2d55b898-0eab-4666-acca-9711909e4dcf"). InnerVolumeSpecName "ovsdbserver-nb".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.697511 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "2d55b898-0eab-4666-acca-9711909e4dcf" (UID: "2d55b898-0eab-4666-acca-9711909e4dcf"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.699775 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2d55b898-0eab-4666-acca-9711909e4dcf" (UID: "2d55b898-0eab-4666-acca-9711909e4dcf"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.702380 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-config" (OuterVolumeSpecName: "config") pod "2d55b898-0eab-4666-acca-9711909e4dcf" (UID: "2d55b898-0eab-4666-acca-9711909e4dcf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.707049 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2d55b898-0eab-4666-acca-9711909e4dcf" (UID: "2d55b898-0eab-4666-acca-9711909e4dcf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.712116 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2d55b898-0eab-4666-acca-9711909e4dcf" (UID: "2d55b898-0eab-4666-acca-9711909e4dcf"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.712282 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.712466 4708 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-config\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.712537 4708 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.712548 4708 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.712558 4708 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.712566 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8hbt\" (UniqueName: \"kubernetes.io/projected/2d55b898-0eab-4666-acca-9711909e4dcf-kube-api-access-q8hbt\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.813801 4708 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2d55b898-0eab-4666-acca-9711909e4dcf-dns-svc\") on node \"crc\" DevicePath \"\"" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.818641 4708 generic.go:334] "Generic (PLEG): container finished" podID="2d55b898-0eab-4666-acca-9711909e4dcf" containerID="b386c202488e4cfdfa35f3802328331b8e94ab69bc74f3ddae7cb23b4e64bd7e" exitCode=0 Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.819684 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.821907 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx" event={"ID":"2d55b898-0eab-4666-acca-9711909e4dcf","Type":"ContainerDied","Data":"b386c202488e4cfdfa35f3802328331b8e94ab69bc74f3ddae7cb23b4e64bd7e"} Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.821967 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-dbb88bf8c-qv2xx" event={"ID":"2d55b898-0eab-4666-acca-9711909e4dcf","Type":"ContainerDied","Data":"dd557253ec4f9f29bfb1552ba1dcd1c376aaa8ffa92d7c367b79ed9f009eb1fb"} Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.821988 4708 scope.go:117] "RemoveContainer" containerID="b386c202488e4cfdfa35f3802328331b8e94ab69bc74f3ddae7cb23b4e64bd7e" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.854974 4708 scope.go:117] "RemoveContainer" containerID="f11c11793b7de1a0aaac564408971c9a7248f86c2d64fa1f4343dadfa33145cd" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.868977 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-qv2xx"] Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.877275 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-dbb88bf8c-qv2xx"] Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.890884 4708 scope.go:117] "RemoveContainer" containerID="b386c202488e4cfdfa35f3802328331b8e94ab69bc74f3ddae7cb23b4e64bd7e" Feb 27 17:20:50 crc kubenswrapper[4708]: E0227 17:20:50.891262 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b386c202488e4cfdfa35f3802328331b8e94ab69bc74f3ddae7cb23b4e64bd7e\": container with ID starting with b386c202488e4cfdfa35f3802328331b8e94ab69bc74f3ddae7cb23b4e64bd7e not found: ID does not exist" containerID="b386c202488e4cfdfa35f3802328331b8e94ab69bc74f3ddae7cb23b4e64bd7e" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.891304 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b386c202488e4cfdfa35f3802328331b8e94ab69bc74f3ddae7cb23b4e64bd7e"} err="failed to get container status \"b386c202488e4cfdfa35f3802328331b8e94ab69bc74f3ddae7cb23b4e64bd7e\": rpc error: code = NotFound desc = could not find container \"b386c202488e4cfdfa35f3802328331b8e94ab69bc74f3ddae7cb23b4e64bd7e\": container with ID starting with b386c202488e4cfdfa35f3802328331b8e94ab69bc74f3ddae7cb23b4e64bd7e not found: ID does not exist" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.891351 4708 scope.go:117] "RemoveContainer" containerID="f11c11793b7de1a0aaac564408971c9a7248f86c2d64fa1f4343dadfa33145cd" Feb 27 17:20:50 crc kubenswrapper[4708]: E0227 17:20:50.891763 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f11c11793b7de1a0aaac564408971c9a7248f86c2d64fa1f4343dadfa33145cd\": container with ID starting with f11c11793b7de1a0aaac564408971c9a7248f86c2d64fa1f4343dadfa33145cd not found: ID does not exist" containerID="f11c11793b7de1a0aaac564408971c9a7248f86c2d64fa1f4343dadfa33145cd" Feb 27 17:20:50 crc kubenswrapper[4708]: I0227 17:20:50.891809 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f11c11793b7de1a0aaac564408971c9a7248f86c2d64fa1f4343dadfa33145cd"} err="failed to get container status 
\"f11c11793b7de1a0aaac564408971c9a7248f86c2d64fa1f4343dadfa33145cd\": rpc error: code = NotFound desc = could not find container \"f11c11793b7de1a0aaac564408971c9a7248f86c2d64fa1f4343dadfa33145cd\": container with ID starting with f11c11793b7de1a0aaac564408971c9a7248f86c2d64fa1f4343dadfa33145cd not found: ID does not exist" Feb 27 17:20:52 crc kubenswrapper[4708]: I0227 17:20:52.238271 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d55b898-0eab-4666-acca-9711909e4dcf" path="/var/lib/kubelet/pods/2d55b898-0eab-4666-acca-9711909e4dcf/volumes" Feb 27 17:20:56 crc kubenswrapper[4708]: I0227 17:20:56.790258 4708 scope.go:117] "RemoveContainer" containerID="464ab374952e9ea798847dd85f9ad750f5e3919a70afea2e2dfeee4d20ae9791" Feb 27 17:20:56 crc kubenswrapper[4708]: I0227 17:20:56.840267 4708 scope.go:117] "RemoveContainer" containerID="7ba36d4b083743d4413d8168aa6f629b8004e385f94b162439c2a26d6d87c5d8" Feb 27 17:20:59 crc kubenswrapper[4708]: I0227 17:20:59.161589 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-tq8mz" Feb 27 17:20:59 crc kubenswrapper[4708]: I0227 17:20:59.220488 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-tq8mz" Feb 27 17:20:59 crc kubenswrapper[4708]: I0227 17:20:59.303820 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-tq8mz"] Feb 27 17:20:59 crc kubenswrapper[4708]: I0227 17:20:59.400052 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x8lns"] Feb 27 17:20:59 crc kubenswrapper[4708]: I0227 17:20:59.400644 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-x8lns" podUID="f25135e1-5701-4932-a01a-4e5f550181e6" containerName="registry-server" containerID="cri-o://82f081b6e6613e60635ce63ec60cba6a71715baf9d3524c9d638b7d1aabae47b" gracePeriod=2 Feb 27 17:20:59 crc kubenswrapper[4708]: I0227 17:20:59.939680 4708 generic.go:334] "Generic (PLEG): container finished" podID="f25135e1-5701-4932-a01a-4e5f550181e6" containerID="82f081b6e6613e60635ce63ec60cba6a71715baf9d3524c9d638b7d1aabae47b" exitCode=0 Feb 27 17:20:59 crc kubenswrapper[4708]: I0227 17:20:59.939763 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8lns" event={"ID":"f25135e1-5701-4932-a01a-4e5f550181e6","Type":"ContainerDied","Data":"82f081b6e6613e60635ce63ec60cba6a71715baf9d3524c9d638b7d1aabae47b"} Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.372886 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.454626 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f25135e1-5701-4932-a01a-4e5f550181e6-utilities\") pod \"f25135e1-5701-4932-a01a-4e5f550181e6\" (UID: \"f25135e1-5701-4932-a01a-4e5f550181e6\") " Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.454752 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f25135e1-5701-4932-a01a-4e5f550181e6-catalog-content\") pod \"f25135e1-5701-4932-a01a-4e5f550181e6\" (UID: \"f25135e1-5701-4932-a01a-4e5f550181e6\") " Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.455013 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sz4v2\" (UniqueName: \"kubernetes.io/projected/f25135e1-5701-4932-a01a-4e5f550181e6-kube-api-access-sz4v2\") pod \"f25135e1-5701-4932-a01a-4e5f550181e6\" (UID: \"f25135e1-5701-4932-a01a-4e5f550181e6\") " Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.461291 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f25135e1-5701-4932-a01a-4e5f550181e6-utilities" (OuterVolumeSpecName: "utilities") pod "f25135e1-5701-4932-a01a-4e5f550181e6" (UID: "f25135e1-5701-4932-a01a-4e5f550181e6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.467219 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f25135e1-5701-4932-a01a-4e5f550181e6-kube-api-access-sz4v2" (OuterVolumeSpecName: "kube-api-access-sz4v2") pod "f25135e1-5701-4932-a01a-4e5f550181e6" (UID: "f25135e1-5701-4932-a01a-4e5f550181e6"). InnerVolumeSpecName "kube-api-access-sz4v2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.542946 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f25135e1-5701-4932-a01a-4e5f550181e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f25135e1-5701-4932-a01a-4e5f550181e6" (UID: "f25135e1-5701-4932-a01a-4e5f550181e6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.557885 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sz4v2\" (UniqueName: \"kubernetes.io/projected/f25135e1-5701-4932-a01a-4e5f550181e6-kube-api-access-sz4v2\") on node \"crc\" DevicePath \"\"" Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.557917 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f25135e1-5701-4932-a01a-4e5f550181e6-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.557928 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f25135e1-5701-4932-a01a-4e5f550181e6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.950721 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-x8lns" event={"ID":"f25135e1-5701-4932-a01a-4e5f550181e6","Type":"ContainerDied","Data":"6a8cba260dc4ad8f2c3d1d9f5e39d8612b0788f23d94f138a0316de149ef8c47"} Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.950999 4708 scope.go:117] "RemoveContainer" containerID="82f081b6e6613e60635ce63ec60cba6a71715baf9d3524c9d638b7d1aabae47b" Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.950768 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-x8lns" Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.952194 4708 generic.go:334] "Generic (PLEG): container finished" podID="866e4edf-2f8a-4c4b-9caf-54ad03011231" containerID="a9962c1a56f309ba494497fb07edad9ca12120c4e0eaac29933c36d62c9eae37" exitCode=0 Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.952261 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"866e4edf-2f8a-4c4b-9caf-54ad03011231","Type":"ContainerDied","Data":"a9962c1a56f309ba494497fb07edad9ca12120c4e0eaac29933c36d62c9eae37"} Feb 27 17:21:00 crc kubenswrapper[4708]: I0227 17:21:00.990399 4708 scope.go:117] "RemoveContainer" containerID="eb6463fcf9fc27141a1381f9f74eea999173162c6c571b3bd4f5b25b56d34941" Feb 27 17:21:01 crc kubenswrapper[4708]: I0227 17:21:01.086056 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-x8lns"] Feb 27 17:21:01 crc kubenswrapper[4708]: I0227 17:21:01.089928 4708 scope.go:117] "RemoveContainer" containerID="26c58fcd272fda410e18794b274a2d964b17785e108fb665a0e3b60f2281c070" Feb 27 17:21:01 crc kubenswrapper[4708]: I0227 17:21:01.094825 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-x8lns"] Feb 27 17:21:01 crc kubenswrapper[4708]: I0227 17:21:01.228920 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:21:01 crc kubenswrapper[4708]: E0227 17:21:01.229223 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:21:01 crc kubenswrapper[4708]: I0227 
17:21:01.967752 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"866e4edf-2f8a-4c4b-9caf-54ad03011231","Type":"ContainerStarted","Data":"2007ce5903da0bbc358c0ac42f19a757b01c01761dc1b15238fec42ef5d17242"} Feb 27 17:21:01 crc kubenswrapper[4708]: I0227 17:21:01.968943 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.242111 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f25135e1-5701-4932-a01a-4e5f550181e6" path="/var/lib/kubelet/pods/f25135e1-5701-4932-a01a-4e5f550181e6/volumes" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.978381 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.97836037 podStartE2EDuration="37.97836037s" podCreationTimestamp="2026-02-27 17:20:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:21:01.997024668 +0000 UTC m=+1660.512822315" watchObservedRunningTime="2026-02-27 17:21:02.97836037 +0000 UTC m=+1661.494157957" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.984570 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj"] Feb 27 17:21:02 crc kubenswrapper[4708]: E0227 17:21:02.985277 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25135e1-5701-4932-a01a-4e5f550181e6" containerName="registry-server" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.985368 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25135e1-5701-4932-a01a-4e5f550181e6" containerName="registry-server" Feb 27 17:21:02 crc kubenswrapper[4708]: E0227 17:21:02.985482 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d55b898-0eab-4666-acca-9711909e4dcf" containerName="dnsmasq-dns" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.985563 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d55b898-0eab-4666-acca-9711909e4dcf" containerName="dnsmasq-dns" Feb 27 17:21:02 crc kubenswrapper[4708]: E0227 17:21:02.985652 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25135e1-5701-4932-a01a-4e5f550181e6" containerName="extract-content" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.985721 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25135e1-5701-4932-a01a-4e5f550181e6" containerName="extract-content" Feb 27 17:21:02 crc kubenswrapper[4708]: E0227 17:21:02.985795 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52953be0-5d65-4612-999f-0c6740c4909b" containerName="init" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.985882 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="52953be0-5d65-4612-999f-0c6740c4909b" containerName="init" Feb 27 17:21:02 crc kubenswrapper[4708]: E0227 17:21:02.985990 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d55b898-0eab-4666-acca-9711909e4dcf" containerName="init" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.986066 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d55b898-0eab-4666-acca-9711909e4dcf" containerName="init" Feb 27 17:21:02 crc kubenswrapper[4708]: E0227 17:21:02.986160 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f25135e1-5701-4932-a01a-4e5f550181e6" 
containerName="extract-utilities" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.986265 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f25135e1-5701-4932-a01a-4e5f550181e6" containerName="extract-utilities" Feb 27 17:21:02 crc kubenswrapper[4708]: E0227 17:21:02.986346 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="52953be0-5d65-4612-999f-0c6740c4909b" containerName="dnsmasq-dns" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.986415 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="52953be0-5d65-4612-999f-0c6740c4909b" containerName="dnsmasq-dns" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.986763 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d55b898-0eab-4666-acca-9711909e4dcf" containerName="dnsmasq-dns" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.986903 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f25135e1-5701-4932-a01a-4e5f550181e6" containerName="registry-server" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.987003 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="52953be0-5d65-4612-999f-0c6740c4909b" containerName="dnsmasq-dns" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.988005 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.992610 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.992873 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.992991 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 17:21:02 crc kubenswrapper[4708]: I0227 17:21:02.993859 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.001809 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj"] Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.002231 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.002373 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.002414 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qfq9\" (UniqueName: \"kubernetes.io/projected/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-kube-api-access-9qfq9\") pod 
\"repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.002464 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.035617 4708 generic.go:334] "Generic (PLEG): container finished" podID="7ac4a3d3-0b3a-4fc5-8f98-806ca5810475" containerID="0deb181a3e7f1f01b06d01da02af9c8df1b1896a2c74f529570ed746f55a4e5c" exitCode=0 Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.037030 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475","Type":"ContainerDied","Data":"0deb181a3e7f1f01b06d01da02af9c8df1b1896a2c74f529570ed746f55a4e5c"} Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.104597 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.104658 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qfq9\" (UniqueName: \"kubernetes.io/projected/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-kube-api-access-9qfq9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.104723 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.104765 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.107777 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.110803 4708 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-ssh-key-openstack-edpm-ipam\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.119220 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.128267 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qfq9\" (UniqueName: \"kubernetes.io/projected/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-kube-api-access-9qfq9\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.311372 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:03 crc kubenswrapper[4708]: I0227 17:21:03.932949 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj"] Feb 27 17:21:04 crc kubenswrapper[4708]: I0227 17:21:04.081365 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"7ac4a3d3-0b3a-4fc5-8f98-806ca5810475","Type":"ContainerStarted","Data":"50137b5775a2975095d090494a6c322028d75e6ae54811eb7abcee644094d82b"} Feb 27 17:21:04 crc kubenswrapper[4708]: I0227 17:21:04.081842 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:21:04 crc kubenswrapper[4708]: I0227 17:21:04.092237 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" event={"ID":"7bde186b-7de3-419b-b5fe-58d72f7d1a9e","Type":"ContainerStarted","Data":"8af9d2fb1aa50190e9d5ad042696954b60ae56ccf8ba0cd859b9dda575ade453"} Feb 27 17:21:04 crc kubenswrapper[4708]: I0227 17:21:04.124402 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=37.124376297 podStartE2EDuration="37.124376297s" podCreationTimestamp="2026-02-27 17:20:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 17:21:04.117133294 +0000 UTC m=+1662.632930881" watchObservedRunningTime="2026-02-27 17:21:04.124376297 +0000 UTC m=+1662.640173884" Feb 27 17:21:05 crc kubenswrapper[4708]: I0227 17:21:05.117524 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cloudkitty-api-0" Feb 27 17:21:16 crc kubenswrapper[4708]: I0227 17:21:16.228923 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:21:16 crc kubenswrapper[4708]: E0227 17:21:16.229750 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:21:16 crc kubenswrapper[4708]: I0227 17:21:16.543092 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Feb 27 17:21:17 crc kubenswrapper[4708]: I0227 17:21:17.226720 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" event={"ID":"7bde186b-7de3-419b-b5fe-58d72f7d1a9e","Type":"ContainerStarted","Data":"be26700082f0a1a4c9748b3395ae0e9fefa620dc6c4e1e88b7bb4ce491161cae"} Feb 27 17:21:17 crc kubenswrapper[4708]: I0227 17:21:17.247976 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" podStartSLOduration=2.277799199 podStartE2EDuration="15.247959607s" podCreationTimestamp="2026-02-27 17:21:02 +0000 UTC" firstStartedPulling="2026-02-27 17:21:03.923455803 +0000 UTC m=+1662.439253390" lastFinishedPulling="2026-02-27 17:21:16.893616211 +0000 UTC m=+1675.409413798" observedRunningTime="2026-02-27 17:21:17.241691901 +0000 UTC m=+1675.757489488" watchObservedRunningTime="2026-02-27 17:21:17.247959607 +0000 UTC m=+1675.763757194" Feb 27 17:21:18 crc kubenswrapper[4708]: I0227 17:21:18.347831 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Feb 27 17:21:27 crc kubenswrapper[4708]: I0227 17:21:27.228101 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:21:27 crc kubenswrapper[4708]: E0227 17:21:27.228931 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:21:28 crc kubenswrapper[4708]: I0227 17:21:28.380119 4708 generic.go:334] "Generic (PLEG): container finished" podID="7bde186b-7de3-419b-b5fe-58d72f7d1a9e" containerID="be26700082f0a1a4c9748b3395ae0e9fefa620dc6c4e1e88b7bb4ce491161cae" exitCode=0 Feb 27 17:21:28 crc kubenswrapper[4708]: I0227 17:21:28.380314 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" event={"ID":"7bde186b-7de3-419b-b5fe-58d72f7d1a9e","Type":"ContainerDied","Data":"be26700082f0a1a4c9748b3395ae0e9fefa620dc6c4e1e88b7bb4ce491161cae"} Feb 27 17:21:29 crc kubenswrapper[4708]: I0227 17:21:29.944572 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.086040 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-repo-setup-combined-ca-bundle\") pod \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.086118 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-ssh-key-openstack-edpm-ipam\") pod \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.086200 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-inventory\") pod \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.086430 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qfq9\" (UniqueName: \"kubernetes.io/projected/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-kube-api-access-9qfq9\") pod \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\" (UID: \"7bde186b-7de3-419b-b5fe-58d72f7d1a9e\") " Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.107669 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-kube-api-access-9qfq9" (OuterVolumeSpecName: "kube-api-access-9qfq9") pod "7bde186b-7de3-419b-b5fe-58d72f7d1a9e" (UID: "7bde186b-7de3-419b-b5fe-58d72f7d1a9e"). InnerVolumeSpecName "kube-api-access-9qfq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.107954 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "7bde186b-7de3-419b-b5fe-58d72f7d1a9e" (UID: "7bde186b-7de3-419b-b5fe-58d72f7d1a9e"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.119262 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-inventory" (OuterVolumeSpecName: "inventory") pod "7bde186b-7de3-419b-b5fe-58d72f7d1a9e" (UID: "7bde186b-7de3-419b-b5fe-58d72f7d1a9e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.143176 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7bde186b-7de3-419b-b5fe-58d72f7d1a9e" (UID: "7bde186b-7de3-419b-b5fe-58d72f7d1a9e"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.188367 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qfq9\" (UniqueName: \"kubernetes.io/projected/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-kube-api-access-9qfq9\") on node \"crc\" DevicePath \"\"" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.188394 4708 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.188403 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.188414 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7bde186b-7de3-419b-b5fe-58d72f7d1a9e-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.413241 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" event={"ID":"7bde186b-7de3-419b-b5fe-58d72f7d1a9e","Type":"ContainerDied","Data":"8af9d2fb1aa50190e9d5ad042696954b60ae56ccf8ba0cd859b9dda575ade453"} Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.413516 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8af9d2fb1aa50190e9d5ad042696954b60ae56ccf8ba0cd859b9dda575ade453" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.413329 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.496413 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz"] Feb 27 17:21:30 crc kubenswrapper[4708]: E0227 17:21:30.496831 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7bde186b-7de3-419b-b5fe-58d72f7d1a9e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.496864 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7bde186b-7de3-419b-b5fe-58d72f7d1a9e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.497076 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7bde186b-7de3-419b-b5fe-58d72f7d1a9e" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.497782 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.500432 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.500538 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.500680 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.500928 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.521793 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz"] Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.605189 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kjhcj\" (UniqueName: \"kubernetes.io/projected/366fdafa-6776-4ab6-82b3-be300efc15de-kube-api-access-kjhcj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xpxdz\" (UID: \"366fdafa-6776-4ab6-82b3-be300efc15de\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.605380 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/366fdafa-6776-4ab6-82b3-be300efc15de-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xpxdz\" (UID: \"366fdafa-6776-4ab6-82b3-be300efc15de\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.605429 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/366fdafa-6776-4ab6-82b3-be300efc15de-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xpxdz\" (UID: \"366fdafa-6776-4ab6-82b3-be300efc15de\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.706933 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/366fdafa-6776-4ab6-82b3-be300efc15de-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xpxdz\" (UID: \"366fdafa-6776-4ab6-82b3-be300efc15de\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.706996 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/366fdafa-6776-4ab6-82b3-be300efc15de-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xpxdz\" (UID: \"366fdafa-6776-4ab6-82b3-be300efc15de\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.707052 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kjhcj\" (UniqueName: \"kubernetes.io/projected/366fdafa-6776-4ab6-82b3-be300efc15de-kube-api-access-kjhcj\") pod 
\"redhat-edpm-deployment-openstack-edpm-ipam-xpxdz\" (UID: \"366fdafa-6776-4ab6-82b3-be300efc15de\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.711369 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/366fdafa-6776-4ab6-82b3-be300efc15de-ssh-key-openstack-edpm-ipam\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xpxdz\" (UID: \"366fdafa-6776-4ab6-82b3-be300efc15de\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.712674 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/366fdafa-6776-4ab6-82b3-be300efc15de-inventory\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xpxdz\" (UID: \"366fdafa-6776-4ab6-82b3-be300efc15de\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.735526 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kjhcj\" (UniqueName: \"kubernetes.io/projected/366fdafa-6776-4ab6-82b3-be300efc15de-kube-api-access-kjhcj\") pod \"redhat-edpm-deployment-openstack-edpm-ipam-xpxdz\" (UID: \"366fdafa-6776-4ab6-82b3-be300efc15de\") " pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" Feb 27 17:21:30 crc kubenswrapper[4708]: I0227 17:21:30.834706 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" Feb 27 17:21:31 crc kubenswrapper[4708]: I0227 17:21:31.409626 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz"] Feb 27 17:21:32 crc kubenswrapper[4708]: I0227 17:21:32.438821 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" event={"ID":"366fdafa-6776-4ab6-82b3-be300efc15de","Type":"ContainerStarted","Data":"dabf1df946c1f91c81b80d6ba9508d83bfcbef50f900f30de1ed9f0b11770f5f"} Feb 27 17:21:32 crc kubenswrapper[4708]: I0227 17:21:32.439475 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" event={"ID":"366fdafa-6776-4ab6-82b3-be300efc15de","Type":"ContainerStarted","Data":"a333cef243555b86e303f66f5f8febccb475687496b0a773bf3e71b5e01ea2c3"} Feb 27 17:21:32 crc kubenswrapper[4708]: I0227 17:21:32.468587 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" podStartSLOduration=2.014815574 podStartE2EDuration="2.468562142s" podCreationTimestamp="2026-02-27 17:21:30 +0000 UTC" firstStartedPulling="2026-02-27 17:21:31.417485032 +0000 UTC m=+1689.933282619" lastFinishedPulling="2026-02-27 17:21:31.8712316 +0000 UTC m=+1690.387029187" observedRunningTime="2026-02-27 17:21:32.466048672 +0000 UTC m=+1690.981846299" watchObservedRunningTime="2026-02-27 17:21:32.468562142 +0000 UTC m=+1690.984359769" Feb 27 17:21:35 crc kubenswrapper[4708]: I0227 17:21:35.481730 4708 generic.go:334] "Generic (PLEG): container finished" podID="366fdafa-6776-4ab6-82b3-be300efc15de" containerID="dabf1df946c1f91c81b80d6ba9508d83bfcbef50f900f30de1ed9f0b11770f5f" exitCode=0 Feb 27 17:21:35 crc kubenswrapper[4708]: I0227 17:21:35.486413 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" event={"ID":"366fdafa-6776-4ab6-82b3-be300efc15de","Type":"ContainerDied","Data":"dabf1df946c1f91c81b80d6ba9508d83bfcbef50f900f30de1ed9f0b11770f5f"} Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.113924 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.276384 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/366fdafa-6776-4ab6-82b3-be300efc15de-inventory\") pod \"366fdafa-6776-4ab6-82b3-be300efc15de\" (UID: \"366fdafa-6776-4ab6-82b3-be300efc15de\") " Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.276462 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kjhcj\" (UniqueName: \"kubernetes.io/projected/366fdafa-6776-4ab6-82b3-be300efc15de-kube-api-access-kjhcj\") pod \"366fdafa-6776-4ab6-82b3-be300efc15de\" (UID: \"366fdafa-6776-4ab6-82b3-be300efc15de\") " Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.276529 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/366fdafa-6776-4ab6-82b3-be300efc15de-ssh-key-openstack-edpm-ipam\") pod \"366fdafa-6776-4ab6-82b3-be300efc15de\" (UID: \"366fdafa-6776-4ab6-82b3-be300efc15de\") " Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.282220 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/366fdafa-6776-4ab6-82b3-be300efc15de-kube-api-access-kjhcj" (OuterVolumeSpecName: "kube-api-access-kjhcj") pod "366fdafa-6776-4ab6-82b3-be300efc15de" (UID: "366fdafa-6776-4ab6-82b3-be300efc15de"). InnerVolumeSpecName "kube-api-access-kjhcj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.313080 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/366fdafa-6776-4ab6-82b3-be300efc15de-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "366fdafa-6776-4ab6-82b3-be300efc15de" (UID: "366fdafa-6776-4ab6-82b3-be300efc15de"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.326678 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/366fdafa-6776-4ab6-82b3-be300efc15de-inventory" (OuterVolumeSpecName: "inventory") pod "366fdafa-6776-4ab6-82b3-be300efc15de" (UID: "366fdafa-6776-4ab6-82b3-be300efc15de"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.379080 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/366fdafa-6776-4ab6-82b3-be300efc15de-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.379393 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/366fdafa-6776-4ab6-82b3-be300efc15de-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.379407 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kjhcj\" (UniqueName: \"kubernetes.io/projected/366fdafa-6776-4ab6-82b3-be300efc15de-kube-api-access-kjhcj\") on node \"crc\" DevicePath \"\"" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.511294 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" event={"ID":"366fdafa-6776-4ab6-82b3-be300efc15de","Type":"ContainerDied","Data":"a333cef243555b86e303f66f5f8febccb475687496b0a773bf3e71b5e01ea2c3"} Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.511344 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/redhat-edpm-deployment-openstack-edpm-ipam-xpxdz" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.511358 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a333cef243555b86e303f66f5f8febccb475687496b0a773bf3e71b5e01ea2c3" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.622861 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5"] Feb 27 17:21:37 crc kubenswrapper[4708]: E0227 17:21:37.623315 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="366fdafa-6776-4ab6-82b3-be300efc15de" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.623332 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="366fdafa-6776-4ab6-82b3-be300efc15de" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.623533 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="366fdafa-6776-4ab6-82b3-be300efc15de" containerName="redhat-edpm-deployment-openstack-edpm-ipam" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.624295 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.626726 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.626806 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.627261 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.627295 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.644534 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5"] Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.788908 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.789050 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.789151 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.789191 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-px6sd\" (UniqueName: \"kubernetes.io/projected/14f3f808-a956-4da2-a9b6-b355ff4e2726-kube-api-access-px6sd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.891584 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.891785 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.891943 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.892000 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-px6sd\" (UniqueName: \"kubernetes.io/projected/14f3f808-a956-4da2-a9b6-b355ff4e2726-kube-api-access-px6sd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.897175 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.897819 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-ssh-key-openstack-edpm-ipam\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.900754 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.918528 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-px6sd\" (UniqueName: \"kubernetes.io/projected/14f3f808-a956-4da2-a9b6-b355ff4e2726-kube-api-access-px6sd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:21:37 crc kubenswrapper[4708]: I0227 17:21:37.941503 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:21:38 crc kubenswrapper[4708]: I0227 17:21:38.229555 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:21:38 crc kubenswrapper[4708]: E0227 17:21:38.230648 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:21:38 crc kubenswrapper[4708]: I0227 17:21:38.682300 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5"] Feb 27 17:21:39 crc kubenswrapper[4708]: I0227 17:21:39.551578 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" event={"ID":"14f3f808-a956-4da2-a9b6-b355ff4e2726","Type":"ContainerStarted","Data":"47341bd6048481a80d063c0a2db36318bdebd2c08fa27fe00cf6e7e74ab98584"} Feb 27 17:21:40 crc kubenswrapper[4708]: I0227 17:21:40.588818 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" event={"ID":"14f3f808-a956-4da2-a9b6-b355ff4e2726","Type":"ContainerStarted","Data":"4f6bdaf36e6eaa8284234ecae0f240977778e9ebc32f702971de7cf2154fd90d"} Feb 27 17:21:40 crc kubenswrapper[4708]: I0227 17:21:40.607600 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" podStartSLOduration=2.8992780160000002 podStartE2EDuration="3.607584426s" podCreationTimestamp="2026-02-27 17:21:37 +0000 UTC" firstStartedPulling="2026-02-27 17:21:38.68573508 +0000 UTC m=+1697.201532667" lastFinishedPulling="2026-02-27 17:21:39.39404149 +0000 UTC m=+1697.909839077" observedRunningTime="2026-02-27 17:21:40.601413072 +0000 UTC m=+1699.117210659" watchObservedRunningTime="2026-02-27 17:21:40.607584426 +0000 UTC m=+1699.123382013" Feb 27 17:21:50 crc kubenswrapper[4708]: I0227 17:21:50.229827 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:21:50 crc kubenswrapper[4708]: E0227 17:21:50.231014 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:21:57 crc kubenswrapper[4708]: I0227 17:21:57.186432 4708 scope.go:117] "RemoveContainer" containerID="27e52105f4da2273e7614a61c44724eadc85f029309fd45d14d6569a0b898e67" Feb 27 17:21:57 crc kubenswrapper[4708]: I0227 17:21:57.232923 4708 scope.go:117] "RemoveContainer" containerID="d7204ca821ac865e56198e30bcef5ebc1e063f8442a6f07e4b265d43695a0680" Feb 27 17:21:57 crc kubenswrapper[4708]: I0227 17:21:57.292432 4708 scope.go:117] "RemoveContainer" containerID="b3417a0104cf53c156ef84707529fa10a92f57d5a47d891b57693c2658122b76" Feb 27 17:22:00 crc kubenswrapper[4708]: I0227 17:22:00.142960 
4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536882-ks67v"] Feb 27 17:22:00 crc kubenswrapper[4708]: I0227 17:22:00.144434 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536882-ks67v" Feb 27 17:22:00 crc kubenswrapper[4708]: I0227 17:22:00.146564 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:22:00 crc kubenswrapper[4708]: I0227 17:22:00.147300 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:22:00 crc kubenswrapper[4708]: I0227 17:22:00.147829 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:22:00 crc kubenswrapper[4708]: I0227 17:22:00.161244 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536882-ks67v"] Feb 27 17:22:00 crc kubenswrapper[4708]: I0227 17:22:00.248020 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prvgq\" (UniqueName: \"kubernetes.io/projected/c485e154-dd1d-463f-8ea0-3ccd02262055-kube-api-access-prvgq\") pod \"auto-csr-approver-29536882-ks67v\" (UID: \"c485e154-dd1d-463f-8ea0-3ccd02262055\") " pod="openshift-infra/auto-csr-approver-29536882-ks67v" Feb 27 17:22:00 crc kubenswrapper[4708]: I0227 17:22:00.350655 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prvgq\" (UniqueName: \"kubernetes.io/projected/c485e154-dd1d-463f-8ea0-3ccd02262055-kube-api-access-prvgq\") pod \"auto-csr-approver-29536882-ks67v\" (UID: \"c485e154-dd1d-463f-8ea0-3ccd02262055\") " pod="openshift-infra/auto-csr-approver-29536882-ks67v" Feb 27 17:22:00 crc kubenswrapper[4708]: I0227 17:22:00.369463 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prvgq\" (UniqueName: \"kubernetes.io/projected/c485e154-dd1d-463f-8ea0-3ccd02262055-kube-api-access-prvgq\") pod \"auto-csr-approver-29536882-ks67v\" (UID: \"c485e154-dd1d-463f-8ea0-3ccd02262055\") " pod="openshift-infra/auto-csr-approver-29536882-ks67v" Feb 27 17:22:00 crc kubenswrapper[4708]: I0227 17:22:00.465167 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536882-ks67v" Feb 27 17:22:01 crc kubenswrapper[4708]: I0227 17:22:01.051356 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536882-ks67v"] Feb 27 17:22:01 crc kubenswrapper[4708]: I0227 17:22:01.888577 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536882-ks67v" event={"ID":"c485e154-dd1d-463f-8ea0-3ccd02262055","Type":"ContainerStarted","Data":"b1637df1c30416ca9d8e83a8b214ff730c85ee8ab23663edeebaf6801b1a2e14"} Feb 27 17:22:02 crc kubenswrapper[4708]: I0227 17:22:02.903935 4708 generic.go:334] "Generic (PLEG): container finished" podID="c485e154-dd1d-463f-8ea0-3ccd02262055" containerID="cd5a4674d10a1a17cd90054ba1516ae342102cea189aafe41b687f1999821448" exitCode=0 Feb 27 17:22:02 crc kubenswrapper[4708]: I0227 17:22:02.904026 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536882-ks67v" event={"ID":"c485e154-dd1d-463f-8ea0-3ccd02262055","Type":"ContainerDied","Data":"cd5a4674d10a1a17cd90054ba1516ae342102cea189aafe41b687f1999821448"} Feb 27 17:22:04 crc kubenswrapper[4708]: I0227 17:22:04.380059 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536882-ks67v" Feb 27 17:22:04 crc kubenswrapper[4708]: I0227 17:22:04.456825 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prvgq\" (UniqueName: \"kubernetes.io/projected/c485e154-dd1d-463f-8ea0-3ccd02262055-kube-api-access-prvgq\") pod \"c485e154-dd1d-463f-8ea0-3ccd02262055\" (UID: \"c485e154-dd1d-463f-8ea0-3ccd02262055\") " Feb 27 17:22:04 crc kubenswrapper[4708]: I0227 17:22:04.462485 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c485e154-dd1d-463f-8ea0-3ccd02262055-kube-api-access-prvgq" (OuterVolumeSpecName: "kube-api-access-prvgq") pod "c485e154-dd1d-463f-8ea0-3ccd02262055" (UID: "c485e154-dd1d-463f-8ea0-3ccd02262055"). InnerVolumeSpecName "kube-api-access-prvgq". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:22:04 crc kubenswrapper[4708]: I0227 17:22:04.559005 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prvgq\" (UniqueName: \"kubernetes.io/projected/c485e154-dd1d-463f-8ea0-3ccd02262055-kube-api-access-prvgq\") on node \"crc\" DevicePath \"\"" Feb 27 17:22:04 crc kubenswrapper[4708]: I0227 17:22:04.927276 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536882-ks67v" event={"ID":"c485e154-dd1d-463f-8ea0-3ccd02262055","Type":"ContainerDied","Data":"b1637df1c30416ca9d8e83a8b214ff730c85ee8ab23663edeebaf6801b1a2e14"} Feb 27 17:22:04 crc kubenswrapper[4708]: I0227 17:22:04.927320 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1637df1c30416ca9d8e83a8b214ff730c85ee8ab23663edeebaf6801b1a2e14" Feb 27 17:22:04 crc kubenswrapper[4708]: I0227 17:22:04.927369 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536882-ks67v" Feb 27 17:22:05 crc kubenswrapper[4708]: I0227 17:22:05.229623 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:22:05 crc kubenswrapper[4708]: E0227 17:22:05.230193 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:22:05 crc kubenswrapper[4708]: I0227 17:22:05.493974 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536876-xtn7r"] Feb 27 17:22:05 crc kubenswrapper[4708]: I0227 17:22:05.513577 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536876-xtn7r"] Feb 27 17:22:06 crc kubenswrapper[4708]: I0227 17:22:06.249740 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5534fd9b-068c-43dc-91af-a5014e8bdb24" path="/var/lib/kubelet/pods/5534fd9b-068c-43dc-91af-a5014e8bdb24/volumes" Feb 27 17:22:19 crc kubenswrapper[4708]: I0227 17:22:19.228134 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:22:19 crc kubenswrapper[4708]: E0227 17:22:19.228905 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:22:34 crc kubenswrapper[4708]: I0227 17:22:34.229195 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:22:34 crc kubenswrapper[4708]: E0227 17:22:34.231598 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:22:48 crc kubenswrapper[4708]: I0227 17:22:48.236198 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:22:48 crc kubenswrapper[4708]: E0227 17:22:48.237374 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:22:57 crc kubenswrapper[4708]: I0227 17:22:57.453134 4708 scope.go:117] "RemoveContainer" containerID="a2c31d3d0e0748b42c1e554b43420c60869f6ee7afbf8eff1040d8d11eaf06ac" Feb 27 
Feb 27 17:22:57 crc kubenswrapper[4708]: I0227 17:22:57.492668 4708 scope.go:117] "RemoveContainer" containerID="000cf3a64ca9cac4dc0a575a3326daeb83f13abd60f5f2a76ee98caa21c0a485"
Feb 27 17:22:57 crc kubenswrapper[4708]: I0227 17:22:57.563022 4708 scope.go:117] "RemoveContainer" containerID="82b628f51d5c712d7c99021fcb12ac29169fe79378d581b1d4fc244839d3b797"
Feb 27 17:22:57 crc kubenswrapper[4708]: I0227 17:22:57.611167 4708 scope.go:117] "RemoveContainer" containerID="af15d0deceef92f05aed99432d465f8cad5a5660703d521bccc8a3ebae507d6a"
Feb 27 17:23:03 crc kubenswrapper[4708]: I0227 17:23:03.229417 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e"
Feb 27 17:23:03 crc kubenswrapper[4708]: E0227 17:23:03.230922 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 17:23:18 crc kubenswrapper[4708]: I0227 17:23:18.228972 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e"
Feb 27 17:23:18 crc kubenswrapper[4708]: E0227 17:23:18.230347 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 17:23:29 crc kubenswrapper[4708]: I0227 17:23:29.228917 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e"
Feb 27 17:23:29 crc kubenswrapper[4708]: E0227 17:23:29.229763 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 17:23:42 crc kubenswrapper[4708]: I0227 17:23:42.229337 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e"
Feb 27 17:23:42 crc kubenswrapper[4708]: E0227 17:23:42.230428 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
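[note] The machine-config-daemon pair that keeps repeating — "RemoveContainer" followed by "Error syncing pod, skipping … CrashLoopBackOff: back-off 5m0s" — is the kubelet declining to restart the container while its backoff window is open. The restart delay roughly doubles on each failure from an initial 10s up to a 5m cap (kubelet defaults, stated here from memory); once at the cap, the refusal simply repeats on every sync attempt, which is what this stretch of the log shows:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        delay, maxDelay := 10*time.Second, 5*time.Minute
        for restart := 1; restart <= 7; restart++ {
            fmt.Printf("restart %d: back-off %v\n", restart, delay)
            if delay *= 2; delay > maxDelay {
                delay = maxDelay
            }
        }
        // back-off grows 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s, ...
        // "back-off 5m0s" in the log messages is this cap.
    }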
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:23:57 crc kubenswrapper[4708]: I0227 17:23:57.762640 4708 scope.go:117] "RemoveContainer" containerID="c0826cfc80041253dde32adc62c0129e47e3f9f59f58071e7a30056235d0f416" Feb 27 17:23:57 crc kubenswrapper[4708]: I0227 17:23:57.810057 4708 scope.go:117] "RemoveContainer" containerID="1bd87f282992d16b9e22ffb4e7bc9789b5a23a1e8ab43aa0ed8d87fbf488b390" Feb 27 17:23:57 crc kubenswrapper[4708]: I0227 17:23:57.838261 4708 scope.go:117] "RemoveContainer" containerID="88fec4da7b80600e36ed3573e1898c2c90c1850824d336ef91df8763e551a0db" Feb 27 17:23:57 crc kubenswrapper[4708]: I0227 17:23:57.889930 4708 scope.go:117] "RemoveContainer" containerID="e768f6c202a42cf223a6c5ebae7a5124171aa2f15f8fc231fc07edab9677ad47" Feb 27 17:24:00 crc kubenswrapper[4708]: I0227 17:24:00.180209 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536884-tr5f9"] Feb 27 17:24:00 crc kubenswrapper[4708]: E0227 17:24:00.181307 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c485e154-dd1d-463f-8ea0-3ccd02262055" containerName="oc" Feb 27 17:24:00 crc kubenswrapper[4708]: I0227 17:24:00.181331 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c485e154-dd1d-463f-8ea0-3ccd02262055" containerName="oc" Feb 27 17:24:00 crc kubenswrapper[4708]: I0227 17:24:00.181701 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c485e154-dd1d-463f-8ea0-3ccd02262055" containerName="oc" Feb 27 17:24:00 crc kubenswrapper[4708]: I0227 17:24:00.182891 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536884-tr5f9" Feb 27 17:24:00 crc kubenswrapper[4708]: I0227 17:24:00.185791 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:24:00 crc kubenswrapper[4708]: I0227 17:24:00.191733 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:24:00 crc kubenswrapper[4708]: I0227 17:24:00.192023 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:24:00 crc kubenswrapper[4708]: I0227 17:24:00.192460 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536884-tr5f9"] Feb 27 17:24:00 crc kubenswrapper[4708]: I0227 17:24:00.296209 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76k52\" (UniqueName: \"kubernetes.io/projected/fb7e8057-bcf9-47a0-adfb-85f3ff61ac21-kube-api-access-76k52\") pod \"auto-csr-approver-29536884-tr5f9\" (UID: \"fb7e8057-bcf9-47a0-adfb-85f3ff61ac21\") " pod="openshift-infra/auto-csr-approver-29536884-tr5f9" Feb 27 17:24:00 crc kubenswrapper[4708]: I0227 17:24:00.398382 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-76k52\" (UniqueName: \"kubernetes.io/projected/fb7e8057-bcf9-47a0-adfb-85f3ff61ac21-kube-api-access-76k52\") pod \"auto-csr-approver-29536884-tr5f9\" (UID: \"fb7e8057-bcf9-47a0-adfb-85f3ff61ac21\") " pod="openshift-infra/auto-csr-approver-29536884-tr5f9" Feb 27 17:24:00 crc kubenswrapper[4708]: I0227 17:24:00.429145 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-76k52\" (UniqueName: \"kubernetes.io/projected/fb7e8057-bcf9-47a0-adfb-85f3ff61ac21-kube-api-access-76k52\") pod \"auto-csr-approver-29536884-tr5f9\" (UID: \"fb7e8057-bcf9-47a0-adfb-85f3ff61ac21\") " pod="openshift-infra/auto-csr-approver-29536884-tr5f9" Feb 27 17:24:00 crc kubenswrapper[4708]: I0227 17:24:00.507642 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536884-tr5f9" Feb 27 17:24:01 crc kubenswrapper[4708]: W0227 17:24:01.063578 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb7e8057_bcf9_47a0_adfb_85f3ff61ac21.slice/crio-4ae6364b4fb17f91ce9d04369b051599d79faa2db37e58755ddf41b0a233a6b5 WatchSource:0}: Error finding container 4ae6364b4fb17f91ce9d04369b051599d79faa2db37e58755ddf41b0a233a6b5: Status 404 returned error can't find the container with id 4ae6364b4fb17f91ce9d04369b051599d79faa2db37e58755ddf41b0a233a6b5 Feb 27 17:24:01 crc kubenswrapper[4708]: I0227 17:24:01.070247 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536884-tr5f9"] Feb 27 17:24:01 crc kubenswrapper[4708]: I0227 17:24:01.543348 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536884-tr5f9" event={"ID":"fb7e8057-bcf9-47a0-adfb-85f3ff61ac21","Type":"ContainerStarted","Data":"4ae6364b4fb17f91ce9d04369b051599d79faa2db37e58755ddf41b0a233a6b5"} Feb 27 17:24:02 crc kubenswrapper[4708]: I0227 17:24:02.566002 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536884-tr5f9" event={"ID":"fb7e8057-bcf9-47a0-adfb-85f3ff61ac21","Type":"ContainerStarted","Data":"e6a3eb2e2350a21c58cc2a889616119b1b5a2a54bc93e1ad35425a674f98af6d"} Feb 27 17:24:02 crc kubenswrapper[4708]: I0227 17:24:02.585445 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536884-tr5f9" podStartSLOduration=1.565322186 podStartE2EDuration="2.585426769s" podCreationTimestamp="2026-02-27 17:24:00 +0000 UTC" firstStartedPulling="2026-02-27 17:24:01.066225508 +0000 UTC m=+1839.582023105" lastFinishedPulling="2026-02-27 17:24:02.086330091 +0000 UTC m=+1840.602127688" observedRunningTime="2026-02-27 17:24:02.579922025 +0000 UTC m=+1841.095719652" watchObservedRunningTime="2026-02-27 17:24:02.585426769 +0000 UTC m=+1841.101224356" Feb 27 17:24:03 crc kubenswrapper[4708]: I0227 17:24:03.579118 4708 generic.go:334] "Generic (PLEG): container finished" podID="fb7e8057-bcf9-47a0-adfb-85f3ff61ac21" containerID="e6a3eb2e2350a21c58cc2a889616119b1b5a2a54bc93e1ad35425a674f98af6d" exitCode=0 Feb 27 17:24:03 crc kubenswrapper[4708]: I0227 17:24:03.579187 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536884-tr5f9" event={"ID":"fb7e8057-bcf9-47a0-adfb-85f3ff61ac21","Type":"ContainerDied","Data":"e6a3eb2e2350a21c58cc2a889616119b1b5a2a54bc93e1ad35425a674f98af6d"} Feb 27 17:24:05 crc kubenswrapper[4708]: I0227 17:24:05.057963 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536884-tr5f9" Feb 27 17:24:05 crc kubenswrapper[4708]: I0227 17:24:05.211730 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-76k52\" (UniqueName: \"kubernetes.io/projected/fb7e8057-bcf9-47a0-adfb-85f3ff61ac21-kube-api-access-76k52\") pod \"fb7e8057-bcf9-47a0-adfb-85f3ff61ac21\" (UID: \"fb7e8057-bcf9-47a0-adfb-85f3ff61ac21\") " Feb 27 17:24:05 crc kubenswrapper[4708]: I0227 17:24:05.219887 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb7e8057-bcf9-47a0-adfb-85f3ff61ac21-kube-api-access-76k52" (OuterVolumeSpecName: "kube-api-access-76k52") pod "fb7e8057-bcf9-47a0-adfb-85f3ff61ac21" (UID: "fb7e8057-bcf9-47a0-adfb-85f3ff61ac21"). InnerVolumeSpecName "kube-api-access-76k52". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:24:05 crc kubenswrapper[4708]: I0227 17:24:05.318922 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-76k52\" (UniqueName: \"kubernetes.io/projected/fb7e8057-bcf9-47a0-adfb-85f3ff61ac21-kube-api-access-76k52\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:05 crc kubenswrapper[4708]: I0227 17:24:05.343724 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536878-fv4q5"] Feb 27 17:24:05 crc kubenswrapper[4708]: I0227 17:24:05.353249 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536878-fv4q5"] Feb 27 17:24:05 crc kubenswrapper[4708]: I0227 17:24:05.601339 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536884-tr5f9" event={"ID":"fb7e8057-bcf9-47a0-adfb-85f3ff61ac21","Type":"ContainerDied","Data":"4ae6364b4fb17f91ce9d04369b051599d79faa2db37e58755ddf41b0a233a6b5"} Feb 27 17:24:05 crc kubenswrapper[4708]: I0227 17:24:05.601378 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ae6364b4fb17f91ce9d04369b051599d79faa2db37e58755ddf41b0a233a6b5" Feb 27 17:24:05 crc kubenswrapper[4708]: I0227 17:24:05.601443 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536884-tr5f9" Feb 27 17:24:06 crc kubenswrapper[4708]: I0227 17:24:06.241342 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da11d788-6fb8-42b3-bdcd-4228dde954c3" path="/var/lib/kubelet/pods/da11d788-6fb8-42b3-bdcd-4228dde954c3/volumes" Feb 27 17:24:09 crc kubenswrapper[4708]: I0227 17:24:09.229187 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:24:09 crc kubenswrapper[4708]: E0227 17:24:09.230406 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:24:22 crc kubenswrapper[4708]: I0227 17:24:22.244256 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:24:22 crc kubenswrapper[4708]: E0227 17:24:22.248023 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:24:33 crc kubenswrapper[4708]: I0227 17:24:33.229359 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:24:33 crc kubenswrapper[4708]: E0227 17:24:33.230498 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.232430 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-96fz5"] Feb 27 17:24:35 crc kubenswrapper[4708]: E0227 17:24:35.233776 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb7e8057-bcf9-47a0-adfb-85f3ff61ac21" containerName="oc" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.233810 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb7e8057-bcf9-47a0-adfb-85f3ff61ac21" containerName="oc" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.234338 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb7e8057-bcf9-47a0-adfb-85f3ff61ac21" containerName="oc" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.238099 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.252636 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-96fz5"] Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.366396 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f957a1-f8b3-4b2f-b214-7fdb967562af-catalog-content\") pod \"community-operators-96fz5\" (UID: \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\") " pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.367556 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f957a1-f8b3-4b2f-b214-7fdb967562af-utilities\") pod \"community-operators-96fz5\" (UID: \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\") " pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.367605 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flw5z\" (UniqueName: \"kubernetes.io/projected/f3f957a1-f8b3-4b2f-b214-7fdb967562af-kube-api-access-flw5z\") pod \"community-operators-96fz5\" (UID: \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\") " pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.421462 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bmktc"] Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.425216 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.434197 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmktc"] Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.470157 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0523852-7b81-444b-b9b1-517a1ca2eaf7-catalog-content\") pod \"redhat-marketplace-bmktc\" (UID: \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\") " pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.470383 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f957a1-f8b3-4b2f-b214-7fdb967562af-catalog-content\") pod \"community-operators-96fz5\" (UID: \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\") " pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.470528 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0523852-7b81-444b-b9b1-517a1ca2eaf7-utilities\") pod \"redhat-marketplace-bmktc\" (UID: \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\") " pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.470670 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dsxkp\" (UniqueName: \"kubernetes.io/projected/c0523852-7b81-444b-b9b1-517a1ca2eaf7-kube-api-access-dsxkp\") pod \"redhat-marketplace-bmktc\" (UID: \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\") " pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.470901 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f957a1-f8b3-4b2f-b214-7fdb967562af-utilities\") pod \"community-operators-96fz5\" (UID: \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\") " pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.471030 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flw5z\" (UniqueName: \"kubernetes.io/projected/f3f957a1-f8b3-4b2f-b214-7fdb967562af-kube-api-access-flw5z\") pod \"community-operators-96fz5\" (UID: \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\") " pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.471591 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f957a1-f8b3-4b2f-b214-7fdb967562af-catalog-content\") pod \"community-operators-96fz5\" (UID: \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\") " pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.471742 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f957a1-f8b3-4b2f-b214-7fdb967562af-utilities\") pod \"community-operators-96fz5\" (UID: \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\") " pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.489905 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-flw5z\" (UniqueName: \"kubernetes.io/projected/f3f957a1-f8b3-4b2f-b214-7fdb967562af-kube-api-access-flw5z\") pod \"community-operators-96fz5\" (UID: \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\") " pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.570629 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.572323 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0523852-7b81-444b-b9b1-517a1ca2eaf7-catalog-content\") pod \"redhat-marketplace-bmktc\" (UID: \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\") " pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.572398 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0523852-7b81-444b-b9b1-517a1ca2eaf7-utilities\") pod \"redhat-marketplace-bmktc\" (UID: \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\") " pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.572456 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dsxkp\" (UniqueName: \"kubernetes.io/projected/c0523852-7b81-444b-b9b1-517a1ca2eaf7-kube-api-access-dsxkp\") pod \"redhat-marketplace-bmktc\" (UID: \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\") " pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.572989 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0523852-7b81-444b-b9b1-517a1ca2eaf7-utilities\") pod \"redhat-marketplace-bmktc\" (UID: \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\") " pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.573231 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0523852-7b81-444b-b9b1-517a1ca2eaf7-catalog-content\") pod \"redhat-marketplace-bmktc\" (UID: \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\") " pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.591127 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsxkp\" (UniqueName: \"kubernetes.io/projected/c0523852-7b81-444b-b9b1-517a1ca2eaf7-kube-api-access-dsxkp\") pod \"redhat-marketplace-bmktc\" (UID: \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\") " pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:35 crc kubenswrapper[4708]: I0227 17:24:35.759131 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:36 crc kubenswrapper[4708]: I0227 17:24:36.034019 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-96fz5"] Feb 27 17:24:36 crc kubenswrapper[4708]: I0227 17:24:36.215455 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmktc"] Feb 27 17:24:36 crc kubenswrapper[4708]: I0227 17:24:36.991642 4708 generic.go:334] "Generic (PLEG): container finished" podID="f3f957a1-f8b3-4b2f-b214-7fdb967562af" containerID="c53a57c6f395c3340f1662825e2dd03df26cee91e1211645a0d780d83c350486" exitCode=0 Feb 27 17:24:36 crc kubenswrapper[4708]: I0227 17:24:36.991740 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-96fz5" event={"ID":"f3f957a1-f8b3-4b2f-b214-7fdb967562af","Type":"ContainerDied","Data":"c53a57c6f395c3340f1662825e2dd03df26cee91e1211645a0d780d83c350486"} Feb 27 17:24:36 crc kubenswrapper[4708]: I0227 17:24:36.991976 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-96fz5" event={"ID":"f3f957a1-f8b3-4b2f-b214-7fdb967562af","Type":"ContainerStarted","Data":"08626d97502c5d490e014206c15c76fe72f15794033c6aabb5e6f1ab40b6c224"} Feb 27 17:24:36 crc kubenswrapper[4708]: I0227 17:24:36.993248 4708 generic.go:334] "Generic (PLEG): container finished" podID="c0523852-7b81-444b-b9b1-517a1ca2eaf7" containerID="06f8a4eb9339f7af0a08ab75e6de8fa6cefed3800983f48f75ec3e6b4ce66d53" exitCode=0 Feb 27 17:24:36 crc kubenswrapper[4708]: I0227 17:24:36.993282 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmktc" event={"ID":"c0523852-7b81-444b-b9b1-517a1ca2eaf7","Type":"ContainerDied","Data":"06f8a4eb9339f7af0a08ab75e6de8fa6cefed3800983f48f75ec3e6b4ce66d53"} Feb 27 17:24:36 crc kubenswrapper[4708]: I0227 17:24:36.993313 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmktc" event={"ID":"c0523852-7b81-444b-b9b1-517a1ca2eaf7","Type":"ContainerStarted","Data":"06bc1bca3f504a0a089fe36f5f02500a1858c15465f8ceb01b911cca9b6a3e4f"} Feb 27 17:24:36 crc kubenswrapper[4708]: I0227 17:24:36.994464 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:24:38 crc kubenswrapper[4708]: I0227 17:24:38.007396 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-96fz5" event={"ID":"f3f957a1-f8b3-4b2f-b214-7fdb967562af","Type":"ContainerStarted","Data":"24db1900431edde26f43307acef8db92e0022e4314761ab5361a9adc2977819e"} Feb 27 17:24:39 crc kubenswrapper[4708]: I0227 17:24:39.016965 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmktc" event={"ID":"c0523852-7b81-444b-b9b1-517a1ca2eaf7","Type":"ContainerStarted","Data":"ec208b97baf385cd057e53188a6110986bbda8ff37a739c5774ea00e17276708"} Feb 27 17:24:39 crc kubenswrapper[4708]: I0227 17:24:39.020815 4708 generic.go:334] "Generic (PLEG): container finished" podID="14f3f808-a956-4da2-a9b6-b355ff4e2726" containerID="4f6bdaf36e6eaa8284234ecae0f240977778e9ebc32f702971de7cf2154fd90d" exitCode=0 Feb 27 17:24:39 crc kubenswrapper[4708]: I0227 17:24:39.020896 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" 
event={"ID":"14f3f808-a956-4da2-a9b6-b355ff4e2726","Type":"ContainerDied","Data":"4f6bdaf36e6eaa8284234ecae0f240977778e9ebc32f702971de7cf2154fd90d"} Feb 27 17:24:40 crc kubenswrapper[4708]: I0227 17:24:40.602114 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:24:40 crc kubenswrapper[4708]: I0227 17:24:40.715255 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-bootstrap-combined-ca-bundle\") pod \"14f3f808-a956-4da2-a9b6-b355ff4e2726\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " Feb 27 17:24:40 crc kubenswrapper[4708]: I0227 17:24:40.715439 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-px6sd\" (UniqueName: \"kubernetes.io/projected/14f3f808-a956-4da2-a9b6-b355ff4e2726-kube-api-access-px6sd\") pod \"14f3f808-a956-4da2-a9b6-b355ff4e2726\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " Feb 27 17:24:40 crc kubenswrapper[4708]: I0227 17:24:40.715471 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-inventory\") pod \"14f3f808-a956-4da2-a9b6-b355ff4e2726\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " Feb 27 17:24:40 crc kubenswrapper[4708]: I0227 17:24:40.715667 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-ssh-key-openstack-edpm-ipam\") pod \"14f3f808-a956-4da2-a9b6-b355ff4e2726\" (UID: \"14f3f808-a956-4da2-a9b6-b355ff4e2726\") " Feb 27 17:24:40 crc kubenswrapper[4708]: I0227 17:24:40.734160 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14f3f808-a956-4da2-a9b6-b355ff4e2726-kube-api-access-px6sd" (OuterVolumeSpecName: "kube-api-access-px6sd") pod "14f3f808-a956-4da2-a9b6-b355ff4e2726" (UID: "14f3f808-a956-4da2-a9b6-b355ff4e2726"). InnerVolumeSpecName "kube-api-access-px6sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:24:40 crc kubenswrapper[4708]: I0227 17:24:40.744563 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-inventory" (OuterVolumeSpecName: "inventory") pod "14f3f808-a956-4da2-a9b6-b355ff4e2726" (UID: "14f3f808-a956-4da2-a9b6-b355ff4e2726"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:24:40 crc kubenswrapper[4708]: I0227 17:24:40.746045 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "14f3f808-a956-4da2-a9b6-b355ff4e2726" (UID: "14f3f808-a956-4da2-a9b6-b355ff4e2726"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:24:40 crc kubenswrapper[4708]: I0227 17:24:40.751424 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "14f3f808-a956-4da2-a9b6-b355ff4e2726" (UID: "14f3f808-a956-4da2-a9b6-b355ff4e2726"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:24:40 crc kubenswrapper[4708]: I0227 17:24:40.817992 4708 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:40 crc kubenswrapper[4708]: I0227 17:24:40.818191 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-px6sd\" (UniqueName: \"kubernetes.io/projected/14f3f808-a956-4da2-a9b6-b355ff4e2726-kube-api-access-px6sd\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:40 crc kubenswrapper[4708]: I0227 17:24:40.818249 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:40 crc kubenswrapper[4708]: I0227 17:24:40.818322 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/14f3f808-a956-4da2-a9b6-b355ff4e2726-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.073604 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" event={"ID":"14f3f808-a956-4da2-a9b6-b355ff4e2726","Type":"ContainerDied","Data":"47341bd6048481a80d063c0a2db36318bdebd2c08fa27fe00cf6e7e74ab98584"} Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.073890 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47341bd6048481a80d063c0a2db36318bdebd2c08fa27fe00cf6e7e74ab98584" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.073664 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.189685 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw"] Feb 27 17:24:41 crc kubenswrapper[4708]: E0227 17:24:41.195537 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14f3f808-a956-4da2-a9b6-b355ff4e2726" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.195567 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="14f3f808-a956-4da2-a9b6-b355ff4e2726" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.195862 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="14f3f808-a956-4da2-a9b6-b355ff4e2726" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.196731 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.199543 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.199728 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.200729 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.201265 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.210970 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw"] Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.326427 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rqm5\" (UniqueName: \"kubernetes.io/projected/378dc842-8c5d-4882-ab1f-3f89e1ed250b-kube-api-access-2rqm5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hclxw\" (UID: \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.326892 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/378dc842-8c5d-4882-ab1f-3f89e1ed250b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hclxw\" (UID: \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.327098 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/378dc842-8c5d-4882-ab1f-3f89e1ed250b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hclxw\" (UID: \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.429140 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rqm5\" (UniqueName: \"kubernetes.io/projected/378dc842-8c5d-4882-ab1f-3f89e1ed250b-kube-api-access-2rqm5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hclxw\" (UID: \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.429214 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/378dc842-8c5d-4882-ab1f-3f89e1ed250b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hclxw\" (UID: \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.429284 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/378dc842-8c5d-4882-ab1f-3f89e1ed250b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hclxw\" (UID: \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.434867 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/378dc842-8c5d-4882-ab1f-3f89e1ed250b-ssh-key-openstack-edpm-ipam\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hclxw\" (UID: \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.444394 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/378dc842-8c5d-4882-ab1f-3f89e1ed250b-inventory\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hclxw\" (UID: \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.455059 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rqm5\" (UniqueName: \"kubernetes.io/projected/378dc842-8c5d-4882-ab1f-3f89e1ed250b-kube-api-access-2rqm5\") pod \"download-cache-edpm-deployment-openstack-edpm-ipam-hclxw\" (UID: \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\") " pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" Feb 27 17:24:41 crc kubenswrapper[4708]: I0227 17:24:41.517559 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" Feb 27 17:24:42 crc kubenswrapper[4708]: I0227 17:24:42.083899 4708 generic.go:334] "Generic (PLEG): container finished" podID="f3f957a1-f8b3-4b2f-b214-7fdb967562af" containerID="24db1900431edde26f43307acef8db92e0022e4314761ab5361a9adc2977819e" exitCode=0 Feb 27 17:24:42 crc kubenswrapper[4708]: I0227 17:24:42.083983 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-96fz5" event={"ID":"f3f957a1-f8b3-4b2f-b214-7fdb967562af","Type":"ContainerDied","Data":"24db1900431edde26f43307acef8db92e0022e4314761ab5361a9adc2977819e"} Feb 27 17:24:42 crc kubenswrapper[4708]: I0227 17:24:42.094566 4708 generic.go:334] "Generic (PLEG): container finished" podID="c0523852-7b81-444b-b9b1-517a1ca2eaf7" containerID="ec208b97baf385cd057e53188a6110986bbda8ff37a739c5774ea00e17276708" exitCode=0 Feb 27 17:24:42 crc kubenswrapper[4708]: I0227 17:24:42.094604 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmktc" event={"ID":"c0523852-7b81-444b-b9b1-517a1ca2eaf7","Type":"ContainerDied","Data":"ec208b97baf385cd057e53188a6110986bbda8ff37a739c5774ea00e17276708"} Feb 27 17:24:42 crc kubenswrapper[4708]: I0227 17:24:42.109642 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw"] Feb 27 17:24:43 crc kubenswrapper[4708]: I0227 17:24:43.128989 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmktc" event={"ID":"c0523852-7b81-444b-b9b1-517a1ca2eaf7","Type":"ContainerStarted","Data":"aea42af391e139a7f66cbce3c98957abc544c0b37957cb60ed131f42c00e02d1"} Feb 27 17:24:43 crc kubenswrapper[4708]: I0227 
17:24:43.145463 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" event={"ID":"378dc842-8c5d-4882-ab1f-3f89e1ed250b","Type":"ContainerStarted","Data":"c0ce01b5158213ae4d30da5264c0567e0ebb76353a895dc5ee18fb6886449cc1"} Feb 27 17:24:43 crc kubenswrapper[4708]: I0227 17:24:43.145510 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" event={"ID":"378dc842-8c5d-4882-ab1f-3f89e1ed250b","Type":"ContainerStarted","Data":"6e3e59ecc3cb0cfa0be40faae11744d23fd7820d95f289f8a1cd1dea77d0a89f"} Feb 27 17:24:43 crc kubenswrapper[4708]: I0227 17:24:43.169715 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-bmktc" podStartSLOduration=2.611538022 podStartE2EDuration="8.169696051s" podCreationTimestamp="2026-02-27 17:24:35 +0000 UTC" firstStartedPulling="2026-02-27 17:24:36.994347195 +0000 UTC m=+1875.510144782" lastFinishedPulling="2026-02-27 17:24:42.552505224 +0000 UTC m=+1881.068302811" observedRunningTime="2026-02-27 17:24:43.167362315 +0000 UTC m=+1881.683159902" watchObservedRunningTime="2026-02-27 17:24:43.169696051 +0000 UTC m=+1881.685493638" Feb 27 17:24:43 crc kubenswrapper[4708]: I0227 17:24:43.171031 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-96fz5" event={"ID":"f3f957a1-f8b3-4b2f-b214-7fdb967562af","Type":"ContainerStarted","Data":"5b819a63b0703358c828133c62d9947a33f028465ba8aae832e867ce280a1fd7"} Feb 27 17:24:43 crc kubenswrapper[4708]: I0227 17:24:43.197228 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" podStartSLOduration=1.731475776 podStartE2EDuration="2.197210301s" podCreationTimestamp="2026-02-27 17:24:41 +0000 UTC" firstStartedPulling="2026-02-27 17:24:42.125275447 +0000 UTC m=+1880.641073044" lastFinishedPulling="2026-02-27 17:24:42.591009972 +0000 UTC m=+1881.106807569" observedRunningTime="2026-02-27 17:24:43.197117179 +0000 UTC m=+1881.712914766" watchObservedRunningTime="2026-02-27 17:24:43.197210301 +0000 UTC m=+1881.713007888" Feb 27 17:24:43 crc kubenswrapper[4708]: I0227 17:24:43.324605 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-96fz5" podStartSLOduration=2.6994040630000002 podStartE2EDuration="8.324586789s" podCreationTimestamp="2026-02-27 17:24:35 +0000 UTC" firstStartedPulling="2026-02-27 17:24:36.994075127 +0000 UTC m=+1875.509872744" lastFinishedPulling="2026-02-27 17:24:42.619257863 +0000 UTC m=+1881.135055470" observedRunningTime="2026-02-27 17:24:43.247028797 +0000 UTC m=+1881.762826384" watchObservedRunningTime="2026-02-27 17:24:43.324586789 +0000 UTC m=+1881.840384376" Feb 27 17:24:45 crc kubenswrapper[4708]: I0227 17:24:45.571354 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:45 crc kubenswrapper[4708]: I0227 17:24:45.572461 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:45 crc kubenswrapper[4708]: I0227 17:24:45.759670 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:45 crc kubenswrapper[4708]: I0227 17:24:45.760685 4708 kubelet.go:2542] "SyncLoop 
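(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bmktc"

The pod_startup_latency_tracker entries above report two figures per pod: podStartSLOduration, which excludes image-pull time, and podStartE2EDuration, which runs from pod creation to observed running. For redhat-marketplace-bmktc the gap is 8.169696051s - 2.611538022s = 5.558158029s, exactly the window between firstStartedPulling (17:24:36.994347195) and lastFinishedPulling (17:24:42.552505224). A minimal sketch for extracting these figures from journal lines, with the regex fitted to the exact field layout seen in this log and nothing more:

    import re

    # Field layout taken from the pod_startup_latency_tracker lines above:
    #   ... pod="ns/name" podStartSLOduration=2.611538022 podStartE2EDuration="8.169696051s" ...
    PATTERN = re.compile(
        r'Observed pod startup duration" pod="(?P<pod>[^"]+)"'
        r' podStartSLOduration=(?P<slo>[\d.]+)'
        r' podStartE2EDuration="(?P<e2e>[\d.]+)s"'
    )

    def startup_durations(lines):
        """Yield (pod, slo_s, e2e_s, approx_pull_s) per latency-tracker entry."""
        for line in lines:
            m = PATTERN.search(line)
            if m:
                slo, e2e = float(m.group("slo")), float(m.group("e2e"))
                yield m.group("pod"), slo, e2e, e2e - slo
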
(probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:45 crc kubenswrapper[4708]: I0227 17:24:45.814455 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:46 crc kubenswrapper[4708]: I0227 17:24:46.628960 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-96fz5" podUID="f3f957a1-f8b3-4b2f-b214-7fdb967562af" containerName="registry-server" probeResult="failure" output=< Feb 27 17:24:46 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 17:24:46 crc kubenswrapper[4708]: > Feb 27 17:24:47 crc kubenswrapper[4708]: I0227 17:24:47.252749 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:48 crc kubenswrapper[4708]: I0227 17:24:48.229063 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:24:48 crc kubenswrapper[4708]: E0227 17:24:48.230145 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:24:48 crc kubenswrapper[4708]: I0227 17:24:48.410030 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmktc"] Feb 27 17:24:49 crc kubenswrapper[4708]: I0227 17:24:49.228667 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bmktc" podUID="c0523852-7b81-444b-b9b1-517a1ca2eaf7" containerName="registry-server" containerID="cri-o://aea42af391e139a7f66cbce3c98957abc544c0b37957cb60ed131f42c00e02d1" gracePeriod=2 Feb 27 17:24:49 crc kubenswrapper[4708]: I0227 17:24:49.867782 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:49 crc kubenswrapper[4708]: I0227 17:24:49.924053 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0523852-7b81-444b-b9b1-517a1ca2eaf7-utilities\") pod \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\" (UID: \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\") " Feb 27 17:24:49 crc kubenswrapper[4708]: I0227 17:24:49.924232 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dsxkp\" (UniqueName: \"kubernetes.io/projected/c0523852-7b81-444b-b9b1-517a1ca2eaf7-kube-api-access-dsxkp\") pod \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\" (UID: \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\") " Feb 27 17:24:49 crc kubenswrapper[4708]: I0227 17:24:49.924411 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0523852-7b81-444b-b9b1-517a1ca2eaf7-catalog-content\") pod \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\" (UID: \"c0523852-7b81-444b-b9b1-517a1ca2eaf7\") " Feb 27 17:24:49 crc kubenswrapper[4708]: I0227 17:24:49.925536 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0523852-7b81-444b-b9b1-517a1ca2eaf7-utilities" (OuterVolumeSpecName: "utilities") pod "c0523852-7b81-444b-b9b1-517a1ca2eaf7" (UID: "c0523852-7b81-444b-b9b1-517a1ca2eaf7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:24:49 crc kubenswrapper[4708]: I0227 17:24:49.933628 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0523852-7b81-444b-b9b1-517a1ca2eaf7-kube-api-access-dsxkp" (OuterVolumeSpecName: "kube-api-access-dsxkp") pod "c0523852-7b81-444b-b9b1-517a1ca2eaf7" (UID: "c0523852-7b81-444b-b9b1-517a1ca2eaf7"). InnerVolumeSpecName "kube-api-access-dsxkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:24:49 crc kubenswrapper[4708]: I0227 17:24:49.951222 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0523852-7b81-444b-b9b1-517a1ca2eaf7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c0523852-7b81-444b-b9b1-517a1ca2eaf7" (UID: "c0523852-7b81-444b-b9b1-517a1ca2eaf7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.026553 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c0523852-7b81-444b-b9b1-517a1ca2eaf7-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.026580 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dsxkp\" (UniqueName: \"kubernetes.io/projected/c0523852-7b81-444b-b9b1-517a1ca2eaf7-kube-api-access-dsxkp\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.026590 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c0523852-7b81-444b-b9b1-517a1ca2eaf7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.243352 4708 generic.go:334] "Generic (PLEG): container finished" podID="c0523852-7b81-444b-b9b1-517a1ca2eaf7" containerID="aea42af391e139a7f66cbce3c98957abc544c0b37957cb60ed131f42c00e02d1" exitCode=0 Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.243420 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmktc" event={"ID":"c0523852-7b81-444b-b9b1-517a1ca2eaf7","Type":"ContainerDied","Data":"aea42af391e139a7f66cbce3c98957abc544c0b37957cb60ed131f42c00e02d1"} Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.243488 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bmktc" event={"ID":"c0523852-7b81-444b-b9b1-517a1ca2eaf7","Type":"ContainerDied","Data":"06bc1bca3f504a0a089fe36f5f02500a1858c15465f8ceb01b911cca9b6a3e4f"} Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.243489 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bmktc" Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.243519 4708 scope.go:117] "RemoveContainer" containerID="aea42af391e139a7f66cbce3c98957abc544c0b37957cb60ed131f42c00e02d1" Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.277746 4708 scope.go:117] "RemoveContainer" containerID="ec208b97baf385cd057e53188a6110986bbda8ff37a739c5774ea00e17276708" Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.282752 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmktc"] Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.293532 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bmktc"] Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.309956 4708 scope.go:117] "RemoveContainer" containerID="06f8a4eb9339f7af0a08ab75e6de8fa6cefed3800983f48f75ec3e6b4ce66d53" Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.358182 4708 scope.go:117] "RemoveContainer" containerID="aea42af391e139a7f66cbce3c98957abc544c0b37957cb60ed131f42c00e02d1" Feb 27 17:24:50 crc kubenswrapper[4708]: E0227 17:24:50.358655 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aea42af391e139a7f66cbce3c98957abc544c0b37957cb60ed131f42c00e02d1\": container with ID starting with aea42af391e139a7f66cbce3c98957abc544c0b37957cb60ed131f42c00e02d1 not found: ID does not exist" containerID="aea42af391e139a7f66cbce3c98957abc544c0b37957cb60ed131f42c00e02d1" Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.358694 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aea42af391e139a7f66cbce3c98957abc544c0b37957cb60ed131f42c00e02d1"} err="failed to get container status \"aea42af391e139a7f66cbce3c98957abc544c0b37957cb60ed131f42c00e02d1\": rpc error: code = NotFound desc = could not find container \"aea42af391e139a7f66cbce3c98957abc544c0b37957cb60ed131f42c00e02d1\": container with ID starting with aea42af391e139a7f66cbce3c98957abc544c0b37957cb60ed131f42c00e02d1 not found: ID does not exist" Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.358723 4708 scope.go:117] "RemoveContainer" containerID="ec208b97baf385cd057e53188a6110986bbda8ff37a739c5774ea00e17276708" Feb 27 17:24:50 crc kubenswrapper[4708]: E0227 17:24:50.359194 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec208b97baf385cd057e53188a6110986bbda8ff37a739c5774ea00e17276708\": container with ID starting with ec208b97baf385cd057e53188a6110986bbda8ff37a739c5774ea00e17276708 not found: ID does not exist" containerID="ec208b97baf385cd057e53188a6110986bbda8ff37a739c5774ea00e17276708" Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.359228 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec208b97baf385cd057e53188a6110986bbda8ff37a739c5774ea00e17276708"} err="failed to get container status \"ec208b97baf385cd057e53188a6110986bbda8ff37a739c5774ea00e17276708\": rpc error: code = NotFound desc = could not find container \"ec208b97baf385cd057e53188a6110986bbda8ff37a739c5774ea00e17276708\": container with ID starting with ec208b97baf385cd057e53188a6110986bbda8ff37a739c5774ea00e17276708 not found: ID does not exist" Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.359248 4708 scope.go:117] "RemoveContainer" 
containerID="06f8a4eb9339f7af0a08ab75e6de8fa6cefed3800983f48f75ec3e6b4ce66d53" Feb 27 17:24:50 crc kubenswrapper[4708]: E0227 17:24:50.359467 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06f8a4eb9339f7af0a08ab75e6de8fa6cefed3800983f48f75ec3e6b4ce66d53\": container with ID starting with 06f8a4eb9339f7af0a08ab75e6de8fa6cefed3800983f48f75ec3e6b4ce66d53 not found: ID does not exist" containerID="06f8a4eb9339f7af0a08ab75e6de8fa6cefed3800983f48f75ec3e6b4ce66d53" Feb 27 17:24:50 crc kubenswrapper[4708]: I0227 17:24:50.359494 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06f8a4eb9339f7af0a08ab75e6de8fa6cefed3800983f48f75ec3e6b4ce66d53"} err="failed to get container status \"06f8a4eb9339f7af0a08ab75e6de8fa6cefed3800983f48f75ec3e6b4ce66d53\": rpc error: code = NotFound desc = could not find container \"06f8a4eb9339f7af0a08ab75e6de8fa6cefed3800983f48f75ec3e6b4ce66d53\": container with ID starting with 06f8a4eb9339f7af0a08ab75e6de8fa6cefed3800983f48f75ec3e6b4ce66d53 not found: ID does not exist" Feb 27 17:24:52 crc kubenswrapper[4708]: I0227 17:24:52.244210 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0523852-7b81-444b-b9b1-517a1ca2eaf7" path="/var/lib/kubelet/pods/c0523852-7b81-444b-b9b1-517a1ca2eaf7/volumes" Feb 27 17:24:55 crc kubenswrapper[4708]: I0227 17:24:55.639985 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:55 crc kubenswrapper[4708]: I0227 17:24:55.714350 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:55 crc kubenswrapper[4708]: I0227 17:24:55.882589 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-96fz5"] Feb 27 17:24:57 crc kubenswrapper[4708]: I0227 17:24:57.328511 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-96fz5" podUID="f3f957a1-f8b3-4b2f-b214-7fdb967562af" containerName="registry-server" containerID="cri-o://5b819a63b0703358c828133c62d9947a33f028465ba8aae832e867ce280a1fd7" gracePeriod=2 Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.023917 4708 scope.go:117] "RemoveContainer" containerID="69410d999819e5a1f1752ac9a4a43cf0f85d5fb8cc17128335ebe0b607ca5ece" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.043186 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.055380 4708 scope.go:117] "RemoveContainer" containerID="530584b49d7d7a5b4eccc55282fad1634c2c8ffccc12cf36c53ea4d3db030e3e" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.084198 4708 scope.go:117] "RemoveContainer" containerID="9a03affd3af6b8eb400d98166887110e7e2c7635a53dda9906a8ed2e0ddae35d" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.105006 4708 scope.go:117] "RemoveContainer" containerID="b53ac37eb796c474a372bb2fb0eb15a25c785f9bf4f55d3d8bee5ec2e99f6e62" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.121786 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f957a1-f8b3-4b2f-b214-7fdb967562af-utilities\") pod \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\" (UID: \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\") " Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.121935 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f957a1-f8b3-4b2f-b214-7fdb967562af-catalog-content\") pod \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\" (UID: \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\") " Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.122048 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flw5z\" (UniqueName: \"kubernetes.io/projected/f3f957a1-f8b3-4b2f-b214-7fdb967562af-kube-api-access-flw5z\") pod \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\" (UID: \"f3f957a1-f8b3-4b2f-b214-7fdb967562af\") " Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.122750 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3f957a1-f8b3-4b2f-b214-7fdb967562af-utilities" (OuterVolumeSpecName: "utilities") pod "f3f957a1-f8b3-4b2f-b214-7fdb967562af" (UID: "f3f957a1-f8b3-4b2f-b214-7fdb967562af"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.125395 4708 scope.go:117] "RemoveContainer" containerID="8b8ddac9c35a192fa3d28063f08a0c7b300e04795e1b185c1c354c4aaf512a9b" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.130085 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3f957a1-f8b3-4b2f-b214-7fdb967562af-kube-api-access-flw5z" (OuterVolumeSpecName: "kube-api-access-flw5z") pod "f3f957a1-f8b3-4b2f-b214-7fdb967562af" (UID: "f3f957a1-f8b3-4b2f-b214-7fdb967562af"). InnerVolumeSpecName "kube-api-access-flw5z". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.184202 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3f957a1-f8b3-4b2f-b214-7fdb967562af-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3f957a1-f8b3-4b2f-b214-7fdb967562af" (UID: "f3f957a1-f8b3-4b2f-b214-7fdb967562af"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.200741 4708 scope.go:117] "RemoveContainer" containerID="4849b0f7eac085b0fa6889fe5a042ff990ae5d3e248647129d269329c2c11095" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.224889 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flw5z\" (UniqueName: \"kubernetes.io/projected/f3f957a1-f8b3-4b2f-b214-7fdb967562af-kube-api-access-flw5z\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.224924 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3f957a1-f8b3-4b2f-b214-7fdb967562af-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.224935 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3f957a1-f8b3-4b2f-b214-7fdb967562af-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.235547 4708 scope.go:117] "RemoveContainer" containerID="09bba017e9ea0abb9ec3f38d9db19e439b09eb4cc97785bf47b5f8b2572e71f1" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.262401 4708 scope.go:117] "RemoveContainer" containerID="460b651d793f3c91cae760c0edb1b0a5f7a3a7025aa2af33d22c631a4d561d5b" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.301879 4708 scope.go:117] "RemoveContainer" containerID="44a9e2e848a4713daf5179ee089abb0edcbe3147214bce618b5fc5d4c52ec523" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.339817 4708 scope.go:117] "RemoveContainer" containerID="b1693c5fcb539856f2b1d6c2ae05787957cfee80f5e43a1acc1b45700050d6bb" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.352925 4708 generic.go:334] "Generic (PLEG): container finished" podID="f3f957a1-f8b3-4b2f-b214-7fdb967562af" containerID="5b819a63b0703358c828133c62d9947a33f028465ba8aae832e867ce280a1fd7" exitCode=0 Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.352971 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-96fz5" event={"ID":"f3f957a1-f8b3-4b2f-b214-7fdb967562af","Type":"ContainerDied","Data":"5b819a63b0703358c828133c62d9947a33f028465ba8aae832e867ce280a1fd7"} Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.353002 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-96fz5" event={"ID":"f3f957a1-f8b3-4b2f-b214-7fdb967562af","Type":"ContainerDied","Data":"08626d97502c5d490e014206c15c76fe72f15794033c6aabb5e6f1ab40b6c224"} Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.353014 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-96fz5" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.353024 4708 scope.go:117] "RemoveContainer" containerID="5b819a63b0703358c828133c62d9947a33f028465ba8aae832e867ce280a1fd7" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.364202 4708 scope.go:117] "RemoveContainer" containerID="c12b0eb6d81db8eb422e9b052ba45dc99776f9f251467608cfd43ea1104725df" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.384284 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-96fz5"] Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.394810 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-96fz5"] Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.404094 4708 scope.go:117] "RemoveContainer" containerID="24db1900431edde26f43307acef8db92e0022e4314761ab5361a9adc2977819e" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.446775 4708 scope.go:117] "RemoveContainer" containerID="f1dbc32b3a2081d2ec4ab558d40443793d478d6e4e5458c524470310e4b81c00" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.472971 4708 scope.go:117] "RemoveContainer" containerID="c53a57c6f395c3340f1662825e2dd03df26cee91e1211645a0d780d83c350486" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.538826 4708 scope.go:117] "RemoveContainer" containerID="5b819a63b0703358c828133c62d9947a33f028465ba8aae832e867ce280a1fd7" Feb 27 17:24:58 crc kubenswrapper[4708]: E0227 17:24:58.540200 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b819a63b0703358c828133c62d9947a33f028465ba8aae832e867ce280a1fd7\": container with ID starting with 5b819a63b0703358c828133c62d9947a33f028465ba8aae832e867ce280a1fd7 not found: ID does not exist" containerID="5b819a63b0703358c828133c62d9947a33f028465ba8aae832e867ce280a1fd7" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.540245 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b819a63b0703358c828133c62d9947a33f028465ba8aae832e867ce280a1fd7"} err="failed to get container status \"5b819a63b0703358c828133c62d9947a33f028465ba8aae832e867ce280a1fd7\": rpc error: code = NotFound desc = could not find container \"5b819a63b0703358c828133c62d9947a33f028465ba8aae832e867ce280a1fd7\": container with ID starting with 5b819a63b0703358c828133c62d9947a33f028465ba8aae832e867ce280a1fd7 not found: ID does not exist" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.540275 4708 scope.go:117] "RemoveContainer" containerID="24db1900431edde26f43307acef8db92e0022e4314761ab5361a9adc2977819e" Feb 27 17:24:58 crc kubenswrapper[4708]: E0227 17:24:58.540808 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24db1900431edde26f43307acef8db92e0022e4314761ab5361a9adc2977819e\": container with ID starting with 24db1900431edde26f43307acef8db92e0022e4314761ab5361a9adc2977819e not found: ID does not exist" containerID="24db1900431edde26f43307acef8db92e0022e4314761ab5361a9adc2977819e" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.540867 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24db1900431edde26f43307acef8db92e0022e4314761ab5361a9adc2977819e"} err="failed to get container status \"24db1900431edde26f43307acef8db92e0022e4314761ab5361a9adc2977819e\": rpc error: code 
= NotFound desc = could not find container \"24db1900431edde26f43307acef8db92e0022e4314761ab5361a9adc2977819e\": container with ID starting with 24db1900431edde26f43307acef8db92e0022e4314761ab5361a9adc2977819e not found: ID does not exist" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.540895 4708 scope.go:117] "RemoveContainer" containerID="c53a57c6f395c3340f1662825e2dd03df26cee91e1211645a0d780d83c350486" Feb 27 17:24:58 crc kubenswrapper[4708]: E0227 17:24:58.541236 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c53a57c6f395c3340f1662825e2dd03df26cee91e1211645a0d780d83c350486\": container with ID starting with c53a57c6f395c3340f1662825e2dd03df26cee91e1211645a0d780d83c350486 not found: ID does not exist" containerID="c53a57c6f395c3340f1662825e2dd03df26cee91e1211645a0d780d83c350486" Feb 27 17:24:58 crc kubenswrapper[4708]: I0227 17:24:58.541277 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c53a57c6f395c3340f1662825e2dd03df26cee91e1211645a0d780d83c350486"} err="failed to get container status \"c53a57c6f395c3340f1662825e2dd03df26cee91e1211645a0d780d83c350486\": rpc error: code = NotFound desc = could not find container \"c53a57c6f395c3340f1662825e2dd03df26cee91e1211645a0d780d83c350486\": container with ID starting with c53a57c6f395c3340f1662825e2dd03df26cee91e1211645a0d780d83c350486 not found: ID does not exist" Feb 27 17:25:00 crc kubenswrapper[4708]: I0227 17:25:00.243823 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3f957a1-f8b3-4b2f-b214-7fdb967562af" path="/var/lib/kubelet/pods/f3f957a1-f8b3-4b2f-b214-7fdb967562af/volumes" Feb 27 17:25:02 crc kubenswrapper[4708]: I0227 17:25:02.245151 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:25:02 crc kubenswrapper[4708]: E0227 17:25:02.246563 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:25:15 crc kubenswrapper[4708]: I0227 17:25:15.228444 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:25:15 crc kubenswrapper[4708]: I0227 17:25:15.571125 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"fc64fcd853be9a08f141cf8d2540773fd0f62639171cb2f54c41087f21e9f447"} Feb 27 17:25:45 crc kubenswrapper[4708]: I0227 17:25:45.088834 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-752e-account-create-update-r66l6"] Feb 27 17:25:45 crc kubenswrapper[4708]: I0227 17:25:45.100798 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-dcz66"] Feb 27 17:25:45 crc kubenswrapper[4708]: I0227 17:25:45.112797 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-dcz66"] Feb 27 17:25:45 crc kubenswrapper[4708]: I0227 17:25:45.122976 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
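pods=["openstack/placement-752e-account-create-update-r66l6"]

Worth noting in the machine-config-daemon entries above: the pod sits in CrashLoopBackOff ("back-off 5m0s restarting failed container"), so the syncs at 17:24:48 and 17:25:02 are skipped, and only at 17:25:15 does the kubelet remove the dead container and the PLEG report a fresh ContainerStarted (fc64fcd8...). The kubelet's restart back-off doubles from 10s up to a 5m ceiling and resets once a container has run cleanly for ten minutes; a toy illustration of that schedule, using the upstream defaults rather than anything read from this cluster:

    def crashloop_backoff(restarts, initial_s=10, cap_s=300):
        """Seconds the kubelet waits before the next restart attempt,
        doubling kubelet-style from 10s until pinned at the 5m cap."""
        return min(initial_s * 2 ** restarts, cap_s)

    # restarts 0..6 -> 10, 20, 40, 80, 160, 300, 300
    print([crashloop_backoff(n) for n in range(7)])
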
pods=["openstack/placement-752e-account-create-update-r66l6"] Feb 27 17:25:46 crc kubenswrapper[4708]: I0227 17:25:46.064691 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-b7e7-account-create-update-985br"] Feb 27 17:25:46 crc kubenswrapper[4708]: I0227 17:25:46.079905 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-v4pxq"] Feb 27 17:25:46 crc kubenswrapper[4708]: I0227 17:25:46.096616 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-b7e7-account-create-update-985br"] Feb 27 17:25:46 crc kubenswrapper[4708]: I0227 17:25:46.113240 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-v4pxq"] Feb 27 17:25:46 crc kubenswrapper[4708]: I0227 17:25:46.246556 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cff7ba0-90d0-441b-ab1a-9d30c9f29e28" path="/var/lib/kubelet/pods/0cff7ba0-90d0-441b-ab1a-9d30c9f29e28/volumes" Feb 27 17:25:46 crc kubenswrapper[4708]: I0227 17:25:46.248967 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49315bcd-31dd-4e2e-8874-12904298dba9" path="/var/lib/kubelet/pods/49315bcd-31dd-4e2e-8874-12904298dba9/volumes" Feb 27 17:25:46 crc kubenswrapper[4708]: I0227 17:25:46.252000 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="588e4d1d-d7fe-425f-9fe3-032b1afd18eb" path="/var/lib/kubelet/pods/588e4d1d-d7fe-425f-9fe3-032b1afd18eb/volumes" Feb 27 17:25:46 crc kubenswrapper[4708]: I0227 17:25:46.252793 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afe8f07a-d35a-4288-a552-9351a6ad0079" path="/var/lib/kubelet/pods/afe8f07a-d35a-4288-a552-9351a6ad0079/volumes" Feb 27 17:25:50 crc kubenswrapper[4708]: I0227 17:25:50.047783 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-27ba-account-create-update-k2rtk"] Feb 27 17:25:50 crc kubenswrapper[4708]: I0227 17:25:50.067878 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-xjljq"] Feb 27 17:25:50 crc kubenswrapper[4708]: I0227 17:25:50.087995 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-xjljq"] Feb 27 17:25:50 crc kubenswrapper[4708]: I0227 17:25:50.088064 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-27ba-account-create-update-k2rtk"] Feb 27 17:25:50 crc kubenswrapper[4708]: I0227 17:25:50.252005 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18c1e66f-ed02-4bf8-be04-bf5d722eb5a1" path="/var/lib/kubelet/pods/18c1e66f-ed02-4bf8-be04-bf5d722eb5a1/volumes" Feb 27 17:25:50 crc kubenswrapper[4708]: I0227 17:25:50.254405 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a" path="/var/lib/kubelet/pods/a9cdcf07-4f86-4f69-bc25-2e4f7841fb2a/volumes" Feb 27 17:25:52 crc kubenswrapper[4708]: I0227 17:25:52.032958 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-nv8ss"] Feb 27 17:25:52 crc kubenswrapper[4708]: I0227 17:25:52.041767 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-nv8ss"] Feb 27 17:25:52 crc kubenswrapper[4708]: I0227 17:25:52.251743 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bcf0e9a-a14c-4b1f-8406-22719bee5979" path="/var/lib/kubelet/pods/6bcf0e9a-a14c-4b1f-8406-22719bee5979/volumes" Feb 27 17:25:53 crc kubenswrapper[4708]: I0227 17:25:53.037627 
4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-create-qdbv7"] Feb 27 17:25:53 crc kubenswrapper[4708]: I0227 17:25:53.054968 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-kqhws"] Feb 27 17:25:53 crc kubenswrapper[4708]: I0227 17:25:53.066825 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-create-qdbv7"] Feb 27 17:25:53 crc kubenswrapper[4708]: I0227 17:25:53.076202 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-kqhws"] Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.038812 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-4zfxn"] Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.056714 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-4zfxn"] Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.071180 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-2006-account-create-update-njkx2"] Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.084259 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-7f00-account-create-update-g5jdv"] Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.092321 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-mq627"] Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.121678 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-2006-account-create-update-njkx2"] Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.138747 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-mq627"] Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.151884 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-7f00-account-create-update-g5jdv"] Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.160881 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-7abe-account-create-update-649dm"] Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.168713 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-136d-account-create-update-pwh4j"] Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.176812 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-7abe-account-create-update-649dm"] Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.185506 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-136d-account-create-update-pwh4j"] Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.246806 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="115ecd43-9912-4bf4-933f-4fa0497f0a9d" path="/var/lib/kubelet/pods/115ecd43-9912-4bf4-933f-4fa0497f0a9d/volumes" Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.257937 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76927836-595f-41d2-ba31-e1e4de928b09" path="/var/lib/kubelet/pods/76927836-595f-41d2-ba31-e1e4de928b09/volumes" Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.260550 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87b33cfd-36db-424a-9225-a9a35b8a8562" path="/var/lib/kubelet/pods/87b33cfd-36db-424a-9225-a9a35b8a8562/volumes" Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.262327 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1bfca09-eb7d-485b-97b2-84ba0df72b73" 
path="/var/lib/kubelet/pods/b1bfca09-eb7d-485b-97b2-84ba0df72b73/volumes" Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.266145 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4c4ff25-5692-417b-bd4c-53fb2cbedba7" path="/var/lib/kubelet/pods/c4c4ff25-5692-417b-bd4c-53fb2cbedba7/volumes" Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.270263 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d0e56c4e-da77-42ed-b415-fafbb5e465ca" path="/var/lib/kubelet/pods/d0e56c4e-da77-42ed-b415-fafbb5e465ca/volumes" Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.273829 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3571f1c-e23d-479d-aceb-d1b79d5b1de0" path="/var/lib/kubelet/pods/d3571f1c-e23d-479d-aceb-d1b79d5b1de0/volumes" Feb 27 17:25:54 crc kubenswrapper[4708]: I0227 17:25:54.275046 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f76c9acf-0333-4355-9c57-46fd59f26866" path="/var/lib/kubelet/pods/f76c9acf-0333-4355-9c57-46fd59f26866/volumes" Feb 27 17:25:58 crc kubenswrapper[4708]: I0227 17:25:58.663391 4708 scope.go:117] "RemoveContainer" containerID="78747c6be1f230887aa936015160311d40ba3ed105c9f843c1a8d329e43c6b45" Feb 27 17:25:58 crc kubenswrapper[4708]: I0227 17:25:58.708599 4708 scope.go:117] "RemoveContainer" containerID="736083f55640d67dd3b6a8270f0a0a9078855d0ba9950b60d2cfe4dab09bce00" Feb 27 17:25:58 crc kubenswrapper[4708]: I0227 17:25:58.758863 4708 scope.go:117] "RemoveContainer" containerID="8a4256956260177c4867a538b012f9139f60f4ac02e084fdcb7655705c504d8e" Feb 27 17:25:58 crc kubenswrapper[4708]: I0227 17:25:58.824204 4708 scope.go:117] "RemoveContainer" containerID="ecb5c8e9128522465d626c7afdfb4930b001dc764c3c6b28fefb8bd22ab39fb2" Feb 27 17:25:58 crc kubenswrapper[4708]: I0227 17:25:58.852794 4708 scope.go:117] "RemoveContainer" containerID="0ec36a861e45a7d8f8ff82966674f72e22028da05ee4a768a4e48524fa376534" Feb 27 17:25:58 crc kubenswrapper[4708]: I0227 17:25:58.899372 4708 scope.go:117] "RemoveContainer" containerID="5cd9cd7696d0ddfb346b1881e68d3aab23ba2ccd6d611b06527401054074620f" Feb 27 17:25:58 crc kubenswrapper[4708]: I0227 17:25:58.942823 4708 scope.go:117] "RemoveContainer" containerID="f82bdd7cb361ac7b8a1d9c4ea602ba899e970fab3d9c02acfdb38f6ac5886210" Feb 27 17:25:58 crc kubenswrapper[4708]: I0227 17:25:58.996203 4708 scope.go:117] "RemoveContainer" containerID="a46c4b1cf298a17161bf28bf4756effcdb2d1c2d319f1bc4e47979a772377343" Feb 27 17:25:59 crc kubenswrapper[4708]: I0227 17:25:59.036378 4708 scope.go:117] "RemoveContainer" containerID="a4a9627741308a5ac9af33acda8a9ae894dd7c360d713237cf2e2733ed78cc23" Feb 27 17:25:59 crc kubenswrapper[4708]: I0227 17:25:59.066965 4708 scope.go:117] "RemoveContainer" containerID="c7b5df1574b323f13bae2164b67198681268e4e5cf4216396dab7a06607f9b6d" Feb 27 17:25:59 crc kubenswrapper[4708]: I0227 17:25:59.095109 4708 scope.go:117] "RemoveContainer" containerID="587e33d4a94b849a493d60e4cc751a09ce0312dc666379edc1841c72a80fd9af" Feb 27 17:25:59 crc kubenswrapper[4708]: I0227 17:25:59.137399 4708 scope.go:117] "RemoveContainer" containerID="3d85cd546ad48a469ad1ea6205c60fab34ec9a955e111ec6b140332b7354fb29" Feb 27 17:25:59 crc kubenswrapper[4708]: I0227 17:25:59.172176 4708 scope.go:117] "RemoveContainer" containerID="3e4a642d8dc4e9bb2356458753a6c853e934f8ee74d53cfd03cc2d8dc36c1877" Feb 27 17:25:59 crc kubenswrapper[4708]: I0227 17:25:59.196178 4708 scope.go:117] "RemoveContainer" 
containerID="4a03e50aed2737f235e797dc38c6490aa11ff4b1ff82b6435958d539f72864d4" Feb 27 17:25:59 crc kubenswrapper[4708]: I0227 17:25:59.221419 4708 scope.go:117] "RemoveContainer" containerID="44ac8eace8c1d1a0ca0423e4014961aed9d702ceef601443d3384a82e7e54dae" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.162708 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536886-xb2zq"] Feb 27 17:26:00 crc kubenswrapper[4708]: E0227 17:26:00.163395 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0523852-7b81-444b-b9b1-517a1ca2eaf7" containerName="extract-content" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.163407 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0523852-7b81-444b-b9b1-517a1ca2eaf7" containerName="extract-content" Feb 27 17:26:00 crc kubenswrapper[4708]: E0227 17:26:00.163424 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0523852-7b81-444b-b9b1-517a1ca2eaf7" containerName="extract-utilities" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.163429 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0523852-7b81-444b-b9b1-517a1ca2eaf7" containerName="extract-utilities" Feb 27 17:26:00 crc kubenswrapper[4708]: E0227 17:26:00.163443 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0523852-7b81-444b-b9b1-517a1ca2eaf7" containerName="registry-server" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.163449 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0523852-7b81-444b-b9b1-517a1ca2eaf7" containerName="registry-server" Feb 27 17:26:00 crc kubenswrapper[4708]: E0227 17:26:00.163478 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f957a1-f8b3-4b2f-b214-7fdb967562af" containerName="registry-server" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.163484 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f957a1-f8b3-4b2f-b214-7fdb967562af" containerName="registry-server" Feb 27 17:26:00 crc kubenswrapper[4708]: E0227 17:26:00.163491 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f957a1-f8b3-4b2f-b214-7fdb967562af" containerName="extract-content" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.163496 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f957a1-f8b3-4b2f-b214-7fdb967562af" containerName="extract-content" Feb 27 17:26:00 crc kubenswrapper[4708]: E0227 17:26:00.163513 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3f957a1-f8b3-4b2f-b214-7fdb967562af" containerName="extract-utilities" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.163520 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3f957a1-f8b3-4b2f-b214-7fdb967562af" containerName="extract-utilities" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.163701 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3f957a1-f8b3-4b2f-b214-7fdb967562af" containerName="registry-server" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.163721 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0523852-7b81-444b-b9b1-517a1ca2eaf7" containerName="registry-server" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.164490 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536886-xb2zq" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.166526 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.166794 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.166979 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.179463 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536886-xb2zq"] Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.257363 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8mlf\" (UniqueName: \"kubernetes.io/projected/88206c48-bc8c-4dc7-a05b-50814f0c7446-kube-api-access-g8mlf\") pod \"auto-csr-approver-29536886-xb2zq\" (UID: \"88206c48-bc8c-4dc7-a05b-50814f0c7446\") " pod="openshift-infra/auto-csr-approver-29536886-xb2zq" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.359407 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8mlf\" (UniqueName: \"kubernetes.io/projected/88206c48-bc8c-4dc7-a05b-50814f0c7446-kube-api-access-g8mlf\") pod \"auto-csr-approver-29536886-xb2zq\" (UID: \"88206c48-bc8c-4dc7-a05b-50814f0c7446\") " pod="openshift-infra/auto-csr-approver-29536886-xb2zq" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.389110 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8mlf\" (UniqueName: \"kubernetes.io/projected/88206c48-bc8c-4dc7-a05b-50814f0c7446-kube-api-access-g8mlf\") pod \"auto-csr-approver-29536886-xb2zq\" (UID: \"88206c48-bc8c-4dc7-a05b-50814f0c7446\") " pod="openshift-infra/auto-csr-approver-29536886-xb2zq" Feb 27 17:26:00 crc kubenswrapper[4708]: I0227 17:26:00.483148 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536886-xb2zq" Feb 27 17:26:01 crc kubenswrapper[4708]: I0227 17:26:01.011121 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536886-xb2zq"] Feb 27 17:26:01 crc kubenswrapper[4708]: I0227 17:26:01.221557 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536886-xb2zq" event={"ID":"88206c48-bc8c-4dc7-a05b-50814f0c7446","Type":"ContainerStarted","Data":"26f2eea1eda59099734c9a5a13c078a4c90d332f30a0da97492139646cb6e2a2"} Feb 27 17:26:03 crc kubenswrapper[4708]: I0227 17:26:03.259170 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536886-xb2zq" event={"ID":"88206c48-bc8c-4dc7-a05b-50814f0c7446","Type":"ContainerStarted","Data":"5c2b724fdf7187a0815bc2a9c0e344beb78098589310dd5f9a7774da98633049"} Feb 27 17:26:03 crc kubenswrapper[4708]: I0227 17:26:03.287120 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536886-xb2zq" podStartSLOduration=2.25477771 podStartE2EDuration="3.287091596s" podCreationTimestamp="2026-02-27 17:26:00 +0000 UTC" firstStartedPulling="2026-02-27 17:26:01.008841363 +0000 UTC m=+1959.524638970" lastFinishedPulling="2026-02-27 17:26:02.041155219 +0000 UTC m=+1960.556952856" observedRunningTime="2026-02-27 17:26:03.280820577 +0000 UTC m=+1961.796618204" watchObservedRunningTime="2026-02-27 17:26:03.287091596 +0000 UTC m=+1961.802889223" Feb 27 17:26:04 crc kubenswrapper[4708]: I0227 17:26:04.277985 4708 generic.go:334] "Generic (PLEG): container finished" podID="88206c48-bc8c-4dc7-a05b-50814f0c7446" containerID="5c2b724fdf7187a0815bc2a9c0e344beb78098589310dd5f9a7774da98633049" exitCode=0 Feb 27 17:26:04 crc kubenswrapper[4708]: I0227 17:26:04.278031 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536886-xb2zq" event={"ID":"88206c48-bc8c-4dc7-a05b-50814f0c7446","Type":"ContainerDied","Data":"5c2b724fdf7187a0815bc2a9c0e344beb78098589310dd5f9a7774da98633049"} Feb 27 17:26:05 crc kubenswrapper[4708]: I0227 17:26:05.795372 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536886-xb2zq" Feb 27 17:26:05 crc kubenswrapper[4708]: I0227 17:26:05.893938 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g8mlf\" (UniqueName: \"kubernetes.io/projected/88206c48-bc8c-4dc7-a05b-50814f0c7446-kube-api-access-g8mlf\") pod \"88206c48-bc8c-4dc7-a05b-50814f0c7446\" (UID: \"88206c48-bc8c-4dc7-a05b-50814f0c7446\") " Feb 27 17:26:05 crc kubenswrapper[4708]: I0227 17:26:05.929152 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88206c48-bc8c-4dc7-a05b-50814f0c7446-kube-api-access-g8mlf" (OuterVolumeSpecName: "kube-api-access-g8mlf") pod "88206c48-bc8c-4dc7-a05b-50814f0c7446" (UID: "88206c48-bc8c-4dc7-a05b-50814f0c7446"). InnerVolumeSpecName "kube-api-access-g8mlf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:26:05 crc kubenswrapper[4708]: I0227 17:26:05.997460 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g8mlf\" (UniqueName: \"kubernetes.io/projected/88206c48-bc8c-4dc7-a05b-50814f0c7446-kube-api-access-g8mlf\") on node \"crc\" DevicePath \"\"" Feb 27 17:26:06 crc kubenswrapper[4708]: I0227 17:26:06.035172 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-h7hx9"] Feb 27 17:26:06 crc kubenswrapper[4708]: I0227 17:26:06.056597 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-h7hx9"] Feb 27 17:26:06 crc kubenswrapper[4708]: I0227 17:26:06.243451 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="988145b2-7dc5-4a8e-8206-bf03ab36fb2a" path="/var/lib/kubelet/pods/988145b2-7dc5-4a8e-8206-bf03ab36fb2a/volumes" Feb 27 17:26:06 crc kubenswrapper[4708]: I0227 17:26:06.307239 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536886-xb2zq" event={"ID":"88206c48-bc8c-4dc7-a05b-50814f0c7446","Type":"ContainerDied","Data":"26f2eea1eda59099734c9a5a13c078a4c90d332f30a0da97492139646cb6e2a2"} Feb 27 17:26:06 crc kubenswrapper[4708]: I0227 17:26:06.307459 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536886-xb2zq" Feb 27 17:26:06 crc kubenswrapper[4708]: I0227 17:26:06.307493 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26f2eea1eda59099734c9a5a13c078a4c90d332f30a0da97492139646cb6e2a2" Feb 27 17:26:06 crc kubenswrapper[4708]: I0227 17:26:06.878450 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536880-2qjgw"] Feb 27 17:26:06 crc kubenswrapper[4708]: I0227 17:26:06.893805 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536880-2qjgw"] Feb 27 17:26:08 crc kubenswrapper[4708]: I0227 17:26:08.241688 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5579d0a9-c19d-4b34-9636-40eab7128bc4" path="/var/lib/kubelet/pods/5579d0a9-c19d-4b34-9636-40eab7128bc4/volumes" Feb 27 17:26:27 crc kubenswrapper[4708]: I0227 17:26:27.533654 4708 generic.go:334] "Generic (PLEG): container finished" podID="378dc842-8c5d-4882-ab1f-3f89e1ed250b" containerID="c0ce01b5158213ae4d30da5264c0567e0ebb76353a895dc5ee18fb6886449cc1" exitCode=0 Feb 27 17:26:27 crc kubenswrapper[4708]: I0227 17:26:27.533763 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" event={"ID":"378dc842-8c5d-4882-ab1f-3f89e1ed250b","Type":"ContainerDied","Data":"c0ce01b5158213ae4d30da5264c0567e0ebb76353a895dc5ee18fb6886449cc1"} Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.163139 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.262638 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/378dc842-8c5d-4882-ab1f-3f89e1ed250b-ssh-key-openstack-edpm-ipam\") pod \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\" (UID: \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\") " Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.262731 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rqm5\" (UniqueName: \"kubernetes.io/projected/378dc842-8c5d-4882-ab1f-3f89e1ed250b-kube-api-access-2rqm5\") pod \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\" (UID: \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\") " Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.262798 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/378dc842-8c5d-4882-ab1f-3f89e1ed250b-inventory\") pod \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\" (UID: \"378dc842-8c5d-4882-ab1f-3f89e1ed250b\") " Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.280184 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/378dc842-8c5d-4882-ab1f-3f89e1ed250b-kube-api-access-2rqm5" (OuterVolumeSpecName: "kube-api-access-2rqm5") pod "378dc842-8c5d-4882-ab1f-3f89e1ed250b" (UID: "378dc842-8c5d-4882-ab1f-3f89e1ed250b"). InnerVolumeSpecName "kube-api-access-2rqm5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.332017 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/378dc842-8c5d-4882-ab1f-3f89e1ed250b-inventory" (OuterVolumeSpecName: "inventory") pod "378dc842-8c5d-4882-ab1f-3f89e1ed250b" (UID: "378dc842-8c5d-4882-ab1f-3f89e1ed250b"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.335262 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/378dc842-8c5d-4882-ab1f-3f89e1ed250b-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "378dc842-8c5d-4882-ab1f-3f89e1ed250b" (UID: "378dc842-8c5d-4882-ab1f-3f89e1ed250b"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.364947 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/378dc842-8c5d-4882-ab1f-3f89e1ed250b-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.365197 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rqm5\" (UniqueName: \"kubernetes.io/projected/378dc842-8c5d-4882-ab1f-3f89e1ed250b-kube-api-access-2rqm5\") on node \"crc\" DevicePath \"\"" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.365266 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/378dc842-8c5d-4882-ab1f-3f89e1ed250b-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.555867 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" event={"ID":"378dc842-8c5d-4882-ab1f-3f89e1ed250b","Type":"ContainerDied","Data":"6e3e59ecc3cb0cfa0be40faae11744d23fd7820d95f289f8a1cd1dea77d0a89f"} Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.556149 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6e3e59ecc3cb0cfa0be40faae11744d23fd7820d95f289f8a1cd1dea77d0a89f" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.556010 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/download-cache-edpm-deployment-openstack-edpm-ipam-hclxw" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.640715 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd"] Feb 27 17:26:29 crc kubenswrapper[4708]: E0227 17:26:29.641197 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="378dc842-8c5d-4882-ab1f-3f89e1ed250b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.641215 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="378dc842-8c5d-4882-ab1f-3f89e1ed250b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 27 17:26:29 crc kubenswrapper[4708]: E0227 17:26:29.641228 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88206c48-bc8c-4dc7-a05b-50814f0c7446" containerName="oc" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.641235 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="88206c48-bc8c-4dc7-a05b-50814f0c7446" containerName="oc" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.641432 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="88206c48-bc8c-4dc7-a05b-50814f0c7446" containerName="oc" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.641445 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="378dc842-8c5d-4882-ab1f-3f89e1ed250b" containerName="download-cache-edpm-deployment-openstack-edpm-ipam" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.642135 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.643770 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.643775 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.644190 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.645133 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.657666 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd"] Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.779752 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb98w\" (UniqueName: \"kubernetes.io/projected/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-kube-api-access-kb98w\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd\" (UID: \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.779833 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd\" (UID: \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.780358 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd\" (UID: \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.882778 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd\" (UID: \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.882945 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kb98w\" (UniqueName: \"kubernetes.io/projected/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-kube-api-access-kb98w\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd\" (UID: \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.883022 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd\" (UID: \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.886895 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-ssh-key-openstack-edpm-ipam\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd\" (UID: \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.890818 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd\" (UID: \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.900479 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kb98w\" (UniqueName: \"kubernetes.io/projected/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-kube-api-access-kb98w\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd\" (UID: \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" Feb 27 17:26:29 crc kubenswrapper[4708]: I0227 17:26:29.957668 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" Feb 27 17:26:30 crc kubenswrapper[4708]: I0227 17:26:30.592308 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd"] Feb 27 17:26:30 crc kubenswrapper[4708]: W0227 17:26:30.612892 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd57eeb05_c84e_45a1_8e3a_5c54cd498d30.slice/crio-3bddc51714b020fa26f4d651d0e20ade932906427af2e2eaf844a9fc29c3212b WatchSource:0}: Error finding container 3bddc51714b020fa26f4d651d0e20ade932906427af2e2eaf844a9fc29c3212b: Status 404 returned error can't find the container with id 3bddc51714b020fa26f4d651d0e20ade932906427af2e2eaf844a9fc29c3212b Feb 27 17:26:31 crc kubenswrapper[4708]: I0227 17:26:31.579612 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" event={"ID":"d57eeb05-c84e-45a1-8e3a-5c54cd498d30","Type":"ContainerStarted","Data":"2246b8570f33208e851a249b87216b3e023b38c23db6f7b07093cac21ebe3153"} Feb 27 17:26:31 crc kubenswrapper[4708]: I0227 17:26:31.579882 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" event={"ID":"d57eeb05-c84e-45a1-8e3a-5c54cd498d30","Type":"ContainerStarted","Data":"3bddc51714b020fa26f4d651d0e20ade932906427af2e2eaf844a9fc29c3212b"} Feb 27 17:26:31 crc kubenswrapper[4708]: I0227 17:26:31.612083 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" podStartSLOduration=2.106947427 podStartE2EDuration="2.612054916s" 
podCreationTimestamp="2026-02-27 17:26:29 +0000 UTC" firstStartedPulling="2026-02-27 17:26:30.619201757 +0000 UTC m=+1989.134999364" lastFinishedPulling="2026-02-27 17:26:31.124309226 +0000 UTC m=+1989.640106853" observedRunningTime="2026-02-27 17:26:31.600907194 +0000 UTC m=+1990.116704801" watchObservedRunningTime="2026-02-27 17:26:31.612054916 +0000 UTC m=+1990.127852533" Feb 27 17:26:39 crc kubenswrapper[4708]: I0227 17:26:39.060291 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-ql5zj"] Feb 27 17:26:39 crc kubenswrapper[4708]: I0227 17:26:39.077946 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-ql5zj"] Feb 27 17:26:40 crc kubenswrapper[4708]: I0227 17:26:40.248229 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aee9dccb-4475-404d-b169-496cc3ae6a2b" path="/var/lib/kubelet/pods/aee9dccb-4475-404d-b169-496cc3ae6a2b/volumes" Feb 27 17:26:42 crc kubenswrapper[4708]: I0227 17:26:42.033082 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-jbvsr"] Feb 27 17:26:42 crc kubenswrapper[4708]: I0227 17:26:42.046151 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-jbvsr"] Feb 27 17:26:42 crc kubenswrapper[4708]: I0227 17:26:42.248672 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3f22956-f17c-4339-b166-a3c29355b5d2" path="/var/lib/kubelet/pods/c3f22956-f17c-4339-b166-a3c29355b5d2/volumes" Feb 27 17:26:59 crc kubenswrapper[4708]: I0227 17:26:59.681101 4708 scope.go:117] "RemoveContainer" containerID="8a678cc75fb3233a08743bbb4def5bb1881eb46b274706e2010bbb929a737f30" Feb 27 17:26:59 crc kubenswrapper[4708]: I0227 17:26:59.721055 4708 scope.go:117] "RemoveContainer" containerID="21e6f4b20cfaf2460b5f143c973f2b856e812b7c767c23c3880a0d5d167333ef" Feb 27 17:26:59 crc kubenswrapper[4708]: I0227 17:26:59.759933 4708 scope.go:117] "RemoveContainer" containerID="c339fa7ad4e83fb5976f1214326e916f36bef9cdd1e1eed1a9417d4a57ce5f39" Feb 27 17:26:59 crc kubenswrapper[4708]: I0227 17:26:59.798375 4708 scope.go:117] "RemoveContainer" containerID="a7e5563519d075cc03eeb2eafcb7bf8dd8bb05152315ecb7a3b88557da4e5208" Feb 27 17:27:04 crc kubenswrapper[4708]: I0227 17:27:04.088627 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-jgfws"] Feb 27 17:27:04 crc kubenswrapper[4708]: I0227 17:27:04.104163 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-lzgr7"] Feb 27 17:27:04 crc kubenswrapper[4708]: I0227 17:27:04.114535 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-lzgr7"] Feb 27 17:27:04 crc kubenswrapper[4708]: I0227 17:27:04.122819 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-jgfws"] Feb 27 17:27:04 crc kubenswrapper[4708]: I0227 17:27:04.261605 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="108a278a-0da6-4e63-be97-cea8279e7c99" path="/var/lib/kubelet/pods/108a278a-0da6-4e63-be97-cea8279e7c99/volumes" Feb 27 17:27:04 crc kubenswrapper[4708]: I0227 17:27:04.263576 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a" path="/var/lib/kubelet/pods/9c9bf240-b4c6-46f9-8d63-7b1a7f29ab8a/volumes" Feb 27 17:27:13 crc kubenswrapper[4708]: I0227 17:27:13.036524 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-ggwzp"] Feb 27 17:27:13 crc 
kubenswrapper[4708]: I0227 17:27:13.046042 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-ggwzp"] Feb 27 17:27:14 crc kubenswrapper[4708]: I0227 17:27:14.248567 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd272ccd-a2cc-433f-80bf-96134126ce6b" path="/var/lib/kubelet/pods/dd272ccd-a2cc-433f-80bf-96134126ce6b/volumes" Feb 27 17:27:17 crc kubenswrapper[4708]: I0227 17:27:17.057774 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-s4ckm"] Feb 27 17:27:17 crc kubenswrapper[4708]: I0227 17:27:17.070617 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-s4ckm"] Feb 27 17:27:18 crc kubenswrapper[4708]: I0227 17:27:18.263684 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f4cfb1-705b-40bb-b7aa-d722d1ec00c5" path="/var/lib/kubelet/pods/57f4cfb1-705b-40bb-b7aa-d722d1ec00c5/volumes" Feb 27 17:27:35 crc kubenswrapper[4708]: I0227 17:27:35.634732 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:27:35 crc kubenswrapper[4708]: I0227 17:27:35.635420 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:27:43 crc kubenswrapper[4708]: I0227 17:27:43.515635 4708 generic.go:334] "Generic (PLEG): container finished" podID="d57eeb05-c84e-45a1-8e3a-5c54cd498d30" containerID="2246b8570f33208e851a249b87216b3e023b38c23db6f7b07093cac21ebe3153" exitCode=0 Feb 27 17:27:43 crc kubenswrapper[4708]: I0227 17:27:43.515721 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" event={"ID":"d57eeb05-c84e-45a1-8e3a-5c54cd498d30","Type":"ContainerDied","Data":"2246b8570f33208e851a249b87216b3e023b38c23db6f7b07093cac21ebe3153"} Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.054293 4708 util.go:48] "No ready sandbox for pod can be found. 
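
Records like the generic.go:334 "Generic (PLEG): container finished" entry above carry the containerID and exitCode that the subsequent ContainerDied event refers to. The sketch below pulls those triples out of raw journal text; the regex is keyed to the exact message shape seen in this log (with \s+ separators so records split across wrapped lines still match) and is illustrative only.

# Extract (podID, containerID, exitCode) from "container finished" records.
import re

FINISHED = re.compile(
    r'container finished"\s+podID="(?P<pod>[0-9a-f-]+)"'
    r'\s+containerID="(?P<cid>[0-9a-f]+)"\s+exitCode=(?P<code>-?\d+)'
)

def finished_containers(text):
    """Scan a journal dump and return every completed-container record."""
    return [(m["pod"], m["cid"], int(m["code"])) for m in FINISHED.finditer(text)]

# e.g. finished_containers(open("kubelet.log").read())
#   -> [("d57eeb05-...", "2246b857...", 0), ...]
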
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.126179 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kb98w\" (UniqueName: \"kubernetes.io/projected/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-kube-api-access-kb98w\") pod \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\" (UID: \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\") " Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.126340 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-inventory\") pod \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\" (UID: \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\") " Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.126402 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-ssh-key-openstack-edpm-ipam\") pod \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\" (UID: \"d57eeb05-c84e-45a1-8e3a-5c54cd498d30\") " Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.132572 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-kube-api-access-kb98w" (OuterVolumeSpecName: "kube-api-access-kb98w") pod "d57eeb05-c84e-45a1-8e3a-5c54cd498d30" (UID: "d57eeb05-c84e-45a1-8e3a-5c54cd498d30"). InnerVolumeSpecName "kube-api-access-kb98w". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.156242 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-inventory" (OuterVolumeSpecName: "inventory") pod "d57eeb05-c84e-45a1-8e3a-5c54cd498d30" (UID: "d57eeb05-c84e-45a1-8e3a-5c54cd498d30"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.156741 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d57eeb05-c84e-45a1-8e3a-5c54cd498d30" (UID: "d57eeb05-c84e-45a1-8e3a-5c54cd498d30"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.229470 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kb98w\" (UniqueName: \"kubernetes.io/projected/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-kube-api-access-kb98w\") on node \"crc\" DevicePath \"\"" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.230219 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.230905 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d57eeb05-c84e-45a1-8e3a-5c54cd498d30-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.539955 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" event={"ID":"d57eeb05-c84e-45a1-8e3a-5c54cd498d30","Type":"ContainerDied","Data":"3bddc51714b020fa26f4d651d0e20ade932906427af2e2eaf844a9fc29c3212b"} Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.540327 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3bddc51714b020fa26f4d651d0e20ade932906427af2e2eaf844a9fc29c3212b" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.540044 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.691763 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99"] Feb 27 17:27:45 crc kubenswrapper[4708]: E0227 17:27:45.692370 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d57eeb05-c84e-45a1-8e3a-5c54cd498d30" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.692392 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d57eeb05-c84e-45a1-8e3a-5c54cd498d30" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.692570 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d57eeb05-c84e-45a1-8e3a-5c54cd498d30" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.693268 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.698608 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.698629 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.699089 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.699242 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.707498 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99"] Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.746344 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c1995e2-730c-4f54-a505-cd3794371a7a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lbw99\" (UID: \"3c1995e2-730c-4f54-a505-cd3794371a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.746755 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c1995e2-730c-4f54-a505-cd3794371a7a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lbw99\" (UID: \"3c1995e2-730c-4f54-a505-cd3794371a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.746998 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9wgr\" (UniqueName: \"kubernetes.io/projected/3c1995e2-730c-4f54-a505-cd3794371a7a-kube-api-access-j9wgr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lbw99\" (UID: \"3c1995e2-730c-4f54-a505-cd3794371a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.848689 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c1995e2-730c-4f54-a505-cd3794371a7a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lbw99\" (UID: \"3c1995e2-730c-4f54-a505-cd3794371a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.849018 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c1995e2-730c-4f54-a505-cd3794371a7a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lbw99\" (UID: \"3c1995e2-730c-4f54-a505-cd3794371a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.849092 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9wgr\" (UniqueName: 
\"kubernetes.io/projected/3c1995e2-730c-4f54-a505-cd3794371a7a-kube-api-access-j9wgr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lbw99\" (UID: \"3c1995e2-730c-4f54-a505-cd3794371a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.854802 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c1995e2-730c-4f54-a505-cd3794371a7a-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lbw99\" (UID: \"3c1995e2-730c-4f54-a505-cd3794371a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.855276 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c1995e2-730c-4f54-a505-cd3794371a7a-ssh-key-openstack-edpm-ipam\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lbw99\" (UID: \"3c1995e2-730c-4f54-a505-cd3794371a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" Feb 27 17:27:45 crc kubenswrapper[4708]: I0227 17:27:45.870614 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j9wgr\" (UniqueName: \"kubernetes.io/projected/3c1995e2-730c-4f54-a505-cd3794371a7a-kube-api-access-j9wgr\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-lbw99\" (UID: \"3c1995e2-730c-4f54-a505-cd3794371a7a\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" Feb 27 17:27:46 crc kubenswrapper[4708]: I0227 17:27:46.015558 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" Feb 27 17:27:46 crc kubenswrapper[4708]: I0227 17:27:46.733630 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99"] Feb 27 17:27:47 crc kubenswrapper[4708]: I0227 17:27:47.584695 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" event={"ID":"3c1995e2-730c-4f54-a505-cd3794371a7a","Type":"ContainerStarted","Data":"6d7034a551d0b112bd42a20c5a17ca5381b30df801736025f63a33d8baedda73"} Feb 27 17:27:47 crc kubenswrapper[4708]: I0227 17:27:47.585348 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" event={"ID":"3c1995e2-730c-4f54-a505-cd3794371a7a","Type":"ContainerStarted","Data":"dcafc56a6c09cb5660371d9397ab3b59e34aebc6fcfd6cbadaefc4f60d1dd6b4"} Feb 27 17:27:47 crc kubenswrapper[4708]: I0227 17:27:47.622488 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" podStartSLOduration=2.1238120719999998 podStartE2EDuration="2.622465227s" podCreationTimestamp="2026-02-27 17:27:45 +0000 UTC" firstStartedPulling="2026-02-27 17:27:46.785263175 +0000 UTC m=+2065.301060762" lastFinishedPulling="2026-02-27 17:27:47.2839163 +0000 UTC m=+2065.799713917" observedRunningTime="2026-02-27 17:27:47.616515656 +0000 UTC m=+2066.132313283" watchObservedRunningTime="2026-02-27 17:27:47.622465227 +0000 UTC m=+2066.138262844" Feb 27 17:27:52 crc kubenswrapper[4708]: I0227 17:27:52.649915 4708 generic.go:334] "Generic (PLEG): container finished" podID="3c1995e2-730c-4f54-a505-cd3794371a7a" 
containerID="6d7034a551d0b112bd42a20c5a17ca5381b30df801736025f63a33d8baedda73" exitCode=0 Feb 27 17:27:52 crc kubenswrapper[4708]: I0227 17:27:52.650067 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" event={"ID":"3c1995e2-730c-4f54-a505-cd3794371a7a","Type":"ContainerDied","Data":"6d7034a551d0b112bd42a20c5a17ca5381b30df801736025f63a33d8baedda73"} Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.148199 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.274906 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9wgr\" (UniqueName: \"kubernetes.io/projected/3c1995e2-730c-4f54-a505-cd3794371a7a-kube-api-access-j9wgr\") pod \"3c1995e2-730c-4f54-a505-cd3794371a7a\" (UID: \"3c1995e2-730c-4f54-a505-cd3794371a7a\") " Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.275203 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c1995e2-730c-4f54-a505-cd3794371a7a-ssh-key-openstack-edpm-ipam\") pod \"3c1995e2-730c-4f54-a505-cd3794371a7a\" (UID: \"3c1995e2-730c-4f54-a505-cd3794371a7a\") " Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.275264 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c1995e2-730c-4f54-a505-cd3794371a7a-inventory\") pod \"3c1995e2-730c-4f54-a505-cd3794371a7a\" (UID: \"3c1995e2-730c-4f54-a505-cd3794371a7a\") " Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.319093 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c1995e2-730c-4f54-a505-cd3794371a7a-kube-api-access-j9wgr" (OuterVolumeSpecName: "kube-api-access-j9wgr") pod "3c1995e2-730c-4f54-a505-cd3794371a7a" (UID: "3c1995e2-730c-4f54-a505-cd3794371a7a"). InnerVolumeSpecName "kube-api-access-j9wgr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.384065 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c1995e2-730c-4f54-a505-cd3794371a7a-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "3c1995e2-730c-4f54-a505-cd3794371a7a" (UID: "3c1995e2-730c-4f54-a505-cd3794371a7a"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.384116 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c1995e2-730c-4f54-a505-cd3794371a7a-inventory" (OuterVolumeSpecName: "inventory") pod "3c1995e2-730c-4f54-a505-cd3794371a7a" (UID: "3c1995e2-730c-4f54-a505-cd3794371a7a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.389025 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/3c1995e2-730c-4f54-a505-cd3794371a7a-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.389097 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c1995e2-730c-4f54-a505-cd3794371a7a-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.389114 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9wgr\" (UniqueName: \"kubernetes.io/projected/3c1995e2-730c-4f54-a505-cd3794371a7a-kube-api-access-j9wgr\") on node \"crc\" DevicePath \"\"" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.673729 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" event={"ID":"3c1995e2-730c-4f54-a505-cd3794371a7a","Type":"ContainerDied","Data":"dcafc56a6c09cb5660371d9397ab3b59e34aebc6fcfd6cbadaefc4f60d1dd6b4"} Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.673782 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-lbw99" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.673811 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcafc56a6c09cb5660371d9397ab3b59e34aebc6fcfd6cbadaefc4f60d1dd6b4" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.796511 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j"] Feb 27 17:27:54 crc kubenswrapper[4708]: E0227 17:27:54.797035 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c1995e2-730c-4f54-a505-cd3794371a7a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.797053 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c1995e2-730c-4f54-a505-cd3794371a7a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.797243 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c1995e2-730c-4f54-a505-cd3794371a7a" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.798058 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.806113 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.807055 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.807197 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.807787 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.814820 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j"] Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.902023 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kndt\" (UniqueName: \"kubernetes.io/projected/4fdb3925-ad04-4a50-82e2-2f2362945df4-kube-api-access-4kndt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w625j\" (UID: \"4fdb3925-ad04-4a50-82e2-2f2362945df4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.902106 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fdb3925-ad04-4a50-82e2-2f2362945df4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w625j\" (UID: \"4fdb3925-ad04-4a50-82e2-2f2362945df4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" Feb 27 17:27:54 crc kubenswrapper[4708]: I0227 17:27:54.902214 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdb3925-ad04-4a50-82e2-2f2362945df4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w625j\" (UID: \"4fdb3925-ad04-4a50-82e2-2f2362945df4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" Feb 27 17:27:55 crc kubenswrapper[4708]: I0227 17:27:55.004827 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4kndt\" (UniqueName: \"kubernetes.io/projected/4fdb3925-ad04-4a50-82e2-2f2362945df4-kube-api-access-4kndt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w625j\" (UID: \"4fdb3925-ad04-4a50-82e2-2f2362945df4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" Feb 27 17:27:55 crc kubenswrapper[4708]: I0227 17:27:55.004925 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fdb3925-ad04-4a50-82e2-2f2362945df4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w625j\" (UID: \"4fdb3925-ad04-4a50-82e2-2f2362945df4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" Feb 27 17:27:55 crc kubenswrapper[4708]: I0227 17:27:55.005057 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdb3925-ad04-4a50-82e2-2f2362945df4-inventory\") pod 
\"install-os-edpm-deployment-openstack-edpm-ipam-w625j\" (UID: \"4fdb3925-ad04-4a50-82e2-2f2362945df4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" Feb 27 17:27:55 crc kubenswrapper[4708]: I0227 17:27:55.010387 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fdb3925-ad04-4a50-82e2-2f2362945df4-ssh-key-openstack-edpm-ipam\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w625j\" (UID: \"4fdb3925-ad04-4a50-82e2-2f2362945df4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" Feb 27 17:27:55 crc kubenswrapper[4708]: I0227 17:27:55.011443 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdb3925-ad04-4a50-82e2-2f2362945df4-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w625j\" (UID: \"4fdb3925-ad04-4a50-82e2-2f2362945df4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" Feb 27 17:27:55 crc kubenswrapper[4708]: I0227 17:27:55.028231 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4kndt\" (UniqueName: \"kubernetes.io/projected/4fdb3925-ad04-4a50-82e2-2f2362945df4-kube-api-access-4kndt\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-w625j\" (UID: \"4fdb3925-ad04-4a50-82e2-2f2362945df4\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" Feb 27 17:27:55 crc kubenswrapper[4708]: I0227 17:27:55.131265 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" Feb 27 17:27:55 crc kubenswrapper[4708]: I0227 17:27:55.764043 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j"] Feb 27 17:27:56 crc kubenswrapper[4708]: I0227 17:27:56.700185 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" event={"ID":"4fdb3925-ad04-4a50-82e2-2f2362945df4","Type":"ContainerStarted","Data":"d456e17f5cc1e494b3461c4b3bde6d877fa08424a16f49af2602cb3a53a36edf"} Feb 27 17:27:57 crc kubenswrapper[4708]: I0227 17:27:57.716240 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" event={"ID":"4fdb3925-ad04-4a50-82e2-2f2362945df4","Type":"ContainerStarted","Data":"4bcd9760f149a02e346d64da45e59cc3b2be81eb98479c0255ec3518a8a0c5f5"} Feb 27 17:27:57 crc kubenswrapper[4708]: I0227 17:27:57.753886 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" podStartSLOduration=2.808208403 podStartE2EDuration="3.753864696s" podCreationTimestamp="2026-02-27 17:27:54 +0000 UTC" firstStartedPulling="2026-02-27 17:27:55.749901805 +0000 UTC m=+2074.265699432" lastFinishedPulling="2026-02-27 17:27:56.695558138 +0000 UTC m=+2075.211355725" observedRunningTime="2026-02-27 17:27:57.736700742 +0000 UTC m=+2076.252498369" watchObservedRunningTime="2026-02-27 17:27:57.753864696 +0000 UTC m=+2076.269662293" Feb 27 17:27:59 crc kubenswrapper[4708]: I0227 17:27:59.943694 4708 scope.go:117] "RemoveContainer" containerID="5942bf776f7d55da9c41b01d332b9a021c1833367facdd5dd4e3040e1cc4047d" Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.009408 4708 scope.go:117] "RemoveContainer" 
containerID="50ac60033f97c37889971727e8a28f27504bd0050cd60d78aaf4f010b9c23ef4" Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.047574 4708 scope.go:117] "RemoveContainer" containerID="cd7d77a1074bf8e22de44a3980b43c0f070d4ec56a36e904dfa86ad25063becc" Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.069067 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-43f3-account-create-update-92rv7"] Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.081150 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-43f3-account-create-update-92rv7"] Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.153914 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536888-7hhxj"] Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.155637 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536888-7hhxj" Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.158741 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.161474 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.163046 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.165214 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536888-7hhxj"] Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.180668 4708 scope.go:117] "RemoveContainer" containerID="ae9b64a7309db4fedfe9919e36d91908e6101b9c6814fb46d8e7a3371b045372" Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.218895 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nkff4\" (UniqueName: \"kubernetes.io/projected/db488d12-6b42-4eda-8827-2c5c174a4e60-kube-api-access-nkff4\") pod \"auto-csr-approver-29536888-7hhxj\" (UID: \"db488d12-6b42-4eda-8827-2c5c174a4e60\") " pod="openshift-infra/auto-csr-approver-29536888-7hhxj" Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.240101 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d9181e3-1fa3-4039-ba55-0462c9243351" path="/var/lib/kubelet/pods/4d9181e3-1fa3-4039-ba55-0462c9243351/volumes" Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.321054 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nkff4\" (UniqueName: \"kubernetes.io/projected/db488d12-6b42-4eda-8827-2c5c174a4e60-kube-api-access-nkff4\") pod \"auto-csr-approver-29536888-7hhxj\" (UID: \"db488d12-6b42-4eda-8827-2c5c174a4e60\") " pod="openshift-infra/auto-csr-approver-29536888-7hhxj" Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.347186 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nkff4\" (UniqueName: \"kubernetes.io/projected/db488d12-6b42-4eda-8827-2c5c174a4e60-kube-api-access-nkff4\") pod \"auto-csr-approver-29536888-7hhxj\" (UID: \"db488d12-6b42-4eda-8827-2c5c174a4e60\") " pod="openshift-infra/auto-csr-approver-29536888-7hhxj" Feb 27 17:28:00 crc kubenswrapper[4708]: I0227 17:28:00.488254 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536888-7hhxj" Feb 27 17:28:01 crc kubenswrapper[4708]: I0227 17:28:01.002097 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536888-7hhxj"] Feb 27 17:28:01 crc kubenswrapper[4708]: W0227 17:28:01.011840 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb488d12_6b42_4eda_8827_2c5c174a4e60.slice/crio-913a68a0bdc71e5de94a387c3bc5eed6475bf4c2498752be7c624d6666cadae0 WatchSource:0}: Error finding container 913a68a0bdc71e5de94a387c3bc5eed6475bf4c2498752be7c624d6666cadae0: Status 404 returned error can't find the container with id 913a68a0bdc71e5de94a387c3bc5eed6475bf4c2498752be7c624d6666cadae0 Feb 27 17:28:01 crc kubenswrapper[4708]: I0227 17:28:01.042707 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-xrqrb"] Feb 27 17:28:01 crc kubenswrapper[4708]: I0227 17:28:01.059901 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-ds9xz"] Feb 27 17:28:01 crc kubenswrapper[4708]: I0227 17:28:01.070237 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-ds9xz"] Feb 27 17:28:01 crc kubenswrapper[4708]: I0227 17:28:01.081526 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-xrqrb"] Feb 27 17:28:01 crc kubenswrapper[4708]: I0227 17:28:01.093461 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-d201-account-create-update-gjhmj"] Feb 27 17:28:01 crc kubenswrapper[4708]: I0227 17:28:01.101649 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-lnkws"] Feb 27 17:28:01 crc kubenswrapper[4708]: I0227 17:28:01.109134 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8912-account-create-update-8crv5"] Feb 27 17:28:01 crc kubenswrapper[4708]: I0227 17:28:01.117842 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8912-account-create-update-8crv5"] Feb 27 17:28:01 crc kubenswrapper[4708]: I0227 17:28:01.125907 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-d201-account-create-update-gjhmj"] Feb 27 17:28:01 crc kubenswrapper[4708]: I0227 17:28:01.134388 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-lnkws"] Feb 27 17:28:01 crc kubenswrapper[4708]: I0227 17:28:01.801628 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536888-7hhxj" event={"ID":"db488d12-6b42-4eda-8827-2c5c174a4e60","Type":"ContainerStarted","Data":"913a68a0bdc71e5de94a387c3bc5eed6475bf4c2498752be7c624d6666cadae0"} Feb 27 17:28:02 crc kubenswrapper[4708]: I0227 17:28:02.245646 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="018ebe44-d885-4630-be79-a1dd5dbc46ae" path="/var/lib/kubelet/pods/018ebe44-d885-4630-be79-a1dd5dbc46ae/volumes" Feb 27 17:28:02 crc kubenswrapper[4708]: I0227 17:28:02.247250 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6461de7d-1631-4115-becf-c90470540a61" path="/var/lib/kubelet/pods/6461de7d-1631-4115-becf-c90470540a61/volumes" Feb 27 17:28:02 crc kubenswrapper[4708]: I0227 17:28:02.251021 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7b144171-78f3-46fd-ad40-aafb289868d5" path="/var/lib/kubelet/pods/7b144171-78f3-46fd-ad40-aafb289868d5/volumes" Feb 
27 17:28:02 crc kubenswrapper[4708]: I0227 17:28:02.252218 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="babefe61-6400-45bd-9c1a-2a20c9e0745b" path="/var/lib/kubelet/pods/babefe61-6400-45bd-9c1a-2a20c9e0745b/volumes" Feb 27 17:28:02 crc kubenswrapper[4708]: I0227 17:28:02.253789 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb8e6804-81dd-4862-af76-3015e030b84d" path="/var/lib/kubelet/pods/eb8e6804-81dd-4862-af76-3015e030b84d/volumes" Feb 27 17:28:02 crc kubenswrapper[4708]: I0227 17:28:02.812936 4708 generic.go:334] "Generic (PLEG): container finished" podID="db488d12-6b42-4eda-8827-2c5c174a4e60" containerID="d252a3f599a71f7772121888e0156ca9310d097f16b6451b544a93e5da1bf35d" exitCode=0 Feb 27 17:28:02 crc kubenswrapper[4708]: I0227 17:28:02.813038 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536888-7hhxj" event={"ID":"db488d12-6b42-4eda-8827-2c5c174a4e60","Type":"ContainerDied","Data":"d252a3f599a71f7772121888e0156ca9310d097f16b6451b544a93e5da1bf35d"} Feb 27 17:28:04 crc kubenswrapper[4708]: I0227 17:28:04.292643 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536888-7hhxj" Feb 27 17:28:04 crc kubenswrapper[4708]: I0227 17:28:04.413624 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nkff4\" (UniqueName: \"kubernetes.io/projected/db488d12-6b42-4eda-8827-2c5c174a4e60-kube-api-access-nkff4\") pod \"db488d12-6b42-4eda-8827-2c5c174a4e60\" (UID: \"db488d12-6b42-4eda-8827-2c5c174a4e60\") " Feb 27 17:28:04 crc kubenswrapper[4708]: I0227 17:28:04.419091 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db488d12-6b42-4eda-8827-2c5c174a4e60-kube-api-access-nkff4" (OuterVolumeSpecName: "kube-api-access-nkff4") pod "db488d12-6b42-4eda-8827-2c5c174a4e60" (UID: "db488d12-6b42-4eda-8827-2c5c174a4e60"). InnerVolumeSpecName "kube-api-access-nkff4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:28:04 crc kubenswrapper[4708]: I0227 17:28:04.516466 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nkff4\" (UniqueName: \"kubernetes.io/projected/db488d12-6b42-4eda-8827-2c5c174a4e60-kube-api-access-nkff4\") on node \"crc\" DevicePath \"\"" Feb 27 17:28:04 crc kubenswrapper[4708]: I0227 17:28:04.837905 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536888-7hhxj" event={"ID":"db488d12-6b42-4eda-8827-2c5c174a4e60","Type":"ContainerDied","Data":"913a68a0bdc71e5de94a387c3bc5eed6475bf4c2498752be7c624d6666cadae0"} Feb 27 17:28:04 crc kubenswrapper[4708]: I0227 17:28:04.838272 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="913a68a0bdc71e5de94a387c3bc5eed6475bf4c2498752be7c624d6666cadae0" Feb 27 17:28:04 crc kubenswrapper[4708]: I0227 17:28:04.837993 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536888-7hhxj" Feb 27 17:28:05 crc kubenswrapper[4708]: I0227 17:28:05.372591 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536882-ks67v"] Feb 27 17:28:05 crc kubenswrapper[4708]: I0227 17:28:05.392081 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536882-ks67v"] Feb 27 17:28:05 crc kubenswrapper[4708]: I0227 17:28:05.631372 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:28:05 crc kubenswrapper[4708]: I0227 17:28:05.631423 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:28:06 crc kubenswrapper[4708]: I0227 17:28:06.245048 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c485e154-dd1d-463f-8ea0-3ccd02262055" path="/var/lib/kubelet/pods/c485e154-dd1d-463f-8ea0-3ccd02262055/volumes" Feb 27 17:28:29 crc kubenswrapper[4708]: I0227 17:28:29.071440 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-89gd4"] Feb 27 17:28:29 crc kubenswrapper[4708]: I0227 17:28:29.087174 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-89gd4"] Feb 27 17:28:30 crc kubenswrapper[4708]: I0227 17:28:30.240931 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b9d6fda-ab96-4cea-8fec-2c49b206d095" path="/var/lib/kubelet/pods/8b9d6fda-ab96-4cea-8fec-2c49b206d095/volumes" Feb 27 17:28:35 crc kubenswrapper[4708]: I0227 17:28:35.631983 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:28:35 crc kubenswrapper[4708]: I0227 17:28:35.632517 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:28:35 crc kubenswrapper[4708]: I0227 17:28:35.632563 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 17:28:35 crc kubenswrapper[4708]: I0227 17:28:35.633500 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fc64fcd853be9a08f141cf8d2540773fd0f62639171cb2f54c41087f21e9f447"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:28:35 crc kubenswrapper[4708]: I0227 17:28:35.633571 4708 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://fc64fcd853be9a08f141cf8d2540773fd0f62639171cb2f54c41087f21e9f447" gracePeriod=600 Feb 27 17:28:36 crc kubenswrapper[4708]: I0227 17:28:36.210074 4708 generic.go:334] "Generic (PLEG): container finished" podID="4fdb3925-ad04-4a50-82e2-2f2362945df4" containerID="4bcd9760f149a02e346d64da45e59cc3b2be81eb98479c0255ec3518a8a0c5f5" exitCode=0 Feb 27 17:28:36 crc kubenswrapper[4708]: I0227 17:28:36.210223 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" event={"ID":"4fdb3925-ad04-4a50-82e2-2f2362945df4","Type":"ContainerDied","Data":"4bcd9760f149a02e346d64da45e59cc3b2be81eb98479c0255ec3518a8a0c5f5"} Feb 27 17:28:36 crc kubenswrapper[4708]: I0227 17:28:36.223098 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="fc64fcd853be9a08f141cf8d2540773fd0f62639171cb2f54c41087f21e9f447" exitCode=0 Feb 27 17:28:36 crc kubenswrapper[4708]: I0227 17:28:36.223213 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"fc64fcd853be9a08f141cf8d2540773fd0f62639171cb2f54c41087f21e9f447"} Feb 27 17:28:36 crc kubenswrapper[4708]: I0227 17:28:36.223271 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879"} Feb 27 17:28:36 crc kubenswrapper[4708]: I0227 17:28:36.223311 4708 scope.go:117] "RemoveContainer" containerID="9e145a1814b42abd6a227de139184bc2e7f8f8adbe1555c6785c08e885cbae6e" Feb 27 17:28:37 crc kubenswrapper[4708]: I0227 17:28:37.885241 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.046945 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdb3925-ad04-4a50-82e2-2f2362945df4-inventory\") pod \"4fdb3925-ad04-4a50-82e2-2f2362945df4\" (UID: \"4fdb3925-ad04-4a50-82e2-2f2362945df4\") "
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.047075 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fdb3925-ad04-4a50-82e2-2f2362945df4-ssh-key-openstack-edpm-ipam\") pod \"4fdb3925-ad04-4a50-82e2-2f2362945df4\" (UID: \"4fdb3925-ad04-4a50-82e2-2f2362945df4\") "
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.047114 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4kndt\" (UniqueName: \"kubernetes.io/projected/4fdb3925-ad04-4a50-82e2-2f2362945df4-kube-api-access-4kndt\") pod \"4fdb3925-ad04-4a50-82e2-2f2362945df4\" (UID: \"4fdb3925-ad04-4a50-82e2-2f2362945df4\") "
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.058178 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fdb3925-ad04-4a50-82e2-2f2362945df4-kube-api-access-4kndt" (OuterVolumeSpecName: "kube-api-access-4kndt") pod "4fdb3925-ad04-4a50-82e2-2f2362945df4" (UID: "4fdb3925-ad04-4a50-82e2-2f2362945df4"). InnerVolumeSpecName "kube-api-access-4kndt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.084675 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fdb3925-ad04-4a50-82e2-2f2362945df4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4fdb3925-ad04-4a50-82e2-2f2362945df4" (UID: "4fdb3925-ad04-4a50-82e2-2f2362945df4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.085075 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fdb3925-ad04-4a50-82e2-2f2362945df4-inventory" (OuterVolumeSpecName: "inventory") pod "4fdb3925-ad04-4a50-82e2-2f2362945df4" (UID: "4fdb3925-ad04-4a50-82e2-2f2362945df4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.149574 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4fdb3925-ad04-4a50-82e2-2f2362945df4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.149772 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4kndt\" (UniqueName: \"kubernetes.io/projected/4fdb3925-ad04-4a50-82e2-2f2362945df4-kube-api-access-4kndt\") on node \"crc\" DevicePath \"\""
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.149861 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4fdb3925-ad04-4a50-82e2-2f2362945df4-inventory\") on node \"crc\" DevicePath \"\""
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.256126 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j" event={"ID":"4fdb3925-ad04-4a50-82e2-2f2362945df4","Type":"ContainerDied","Data":"d456e17f5cc1e494b3461c4b3bde6d877fa08424a16f49af2602cb3a53a36edf"}
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.256178 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d456e17f5cc1e494b3461c4b3bde6d877fa08424a16f49af2602cb3a53a36edf"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.256202 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-w625j"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.375572 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"]
Feb 27 17:28:38 crc kubenswrapper[4708]: E0227 17:28:38.376150 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db488d12-6b42-4eda-8827-2c5c174a4e60" containerName="oc"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.376175 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="db488d12-6b42-4eda-8827-2c5c174a4e60" containerName="oc"
Feb 27 17:28:38 crc kubenswrapper[4708]: E0227 17:28:38.376200 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fdb3925-ad04-4a50-82e2-2f2362945df4" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.376211 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fdb3925-ad04-4a50-82e2-2f2362945df4" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.376457 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="db488d12-6b42-4eda-8827-2c5c174a4e60" containerName="oc"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.376482 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fdb3925-ad04-4a50-82e2-2f2362945df4" containerName="install-os-edpm-deployment-openstack-edpm-ipam"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.377546 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.380559 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.381679 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.382196 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.383455 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.388441 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"]
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.559138 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41b80060-486e-4ab2-872a-cfbbdf39b405-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8zf22\" (UID: \"41b80060-486e-4ab2-872a-cfbbdf39b405\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.559526 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lqbr\" (UniqueName: \"kubernetes.io/projected/41b80060-486e-4ab2-872a-cfbbdf39b405-kube-api-access-4lqbr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8zf22\" (UID: \"41b80060-486e-4ab2-872a-cfbbdf39b405\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.559738 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41b80060-486e-4ab2-872a-cfbbdf39b405-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8zf22\" (UID: \"41b80060-486e-4ab2-872a-cfbbdf39b405\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.661460 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lqbr\" (UniqueName: \"kubernetes.io/projected/41b80060-486e-4ab2-872a-cfbbdf39b405-kube-api-access-4lqbr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8zf22\" (UID: \"41b80060-486e-4ab2-872a-cfbbdf39b405\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.661563 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41b80060-486e-4ab2-872a-cfbbdf39b405-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8zf22\" (UID: \"41b80060-486e-4ab2-872a-cfbbdf39b405\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.661610 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41b80060-486e-4ab2-872a-cfbbdf39b405-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8zf22\" (UID: \"41b80060-486e-4ab2-872a-cfbbdf39b405\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.667780 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41b80060-486e-4ab2-872a-cfbbdf39b405-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8zf22\" (UID: \"41b80060-486e-4ab2-872a-cfbbdf39b405\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.669911 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41b80060-486e-4ab2-872a-cfbbdf39b405-ssh-key-openstack-edpm-ipam\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8zf22\" (UID: \"41b80060-486e-4ab2-872a-cfbbdf39b405\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.692346 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lqbr\" (UniqueName: \"kubernetes.io/projected/41b80060-486e-4ab2-872a-cfbbdf39b405-kube-api-access-4lqbr\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-8zf22\" (UID: \"41b80060-486e-4ab2-872a-cfbbdf39b405\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"
Feb 27 17:28:38 crc kubenswrapper[4708]: I0227 17:28:38.722819 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"
Feb 27 17:28:39 crc kubenswrapper[4708]: I0227 17:28:39.330237 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"]
Feb 27 17:28:40 crc kubenswrapper[4708]: I0227 17:28:40.280035 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22" event={"ID":"41b80060-486e-4ab2-872a-cfbbdf39b405","Type":"ContainerStarted","Data":"cc2a9250f7709ee357e05225cf662b63dea74a001365dac22fde57c6e8f4d5cb"}
Feb 27 17:28:40 crc kubenswrapper[4708]: I0227 17:28:40.284446 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22" event={"ID":"41b80060-486e-4ab2-872a-cfbbdf39b405","Type":"ContainerStarted","Data":"e7fa95eee745ced0f94b20be8dd0f08e5134a0a47f9fc1df3f2c03734a1a0f7b"}
Feb 27 17:28:40 crc kubenswrapper[4708]: I0227 17:28:40.312659 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22" podStartSLOduration=1.869878235 podStartE2EDuration="2.312625543s" podCreationTimestamp="2026-02-27 17:28:38 +0000 UTC" firstStartedPulling="2026-02-27 17:28:39.33150738 +0000 UTC m=+2117.847304967" lastFinishedPulling="2026-02-27 17:28:39.774254658 +0000 UTC m=+2118.290052275" observedRunningTime="2026-02-27 17:28:40.309530626 +0000 UTC m=+2118.825328213" watchObservedRunningTime="2026-02-27 17:28:40.312625543 +0000 UTC m=+2118.828423170"
Feb 27 17:28:51 crc kubenswrapper[4708]: I0227 17:28:51.069162 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-g6m5b"]
Feb 27 17:28:51 crc kubenswrapper[4708]: I0227 17:28:51.077968 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-g6m5b"]
Feb 27 17:28:52 crc kubenswrapper[4708]: I0227 17:28:52.238926 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c184c80c-f3fb-47ff-a8b7-46632aa678f4" path="/var/lib/kubelet/pods/c184c80c-f3fb-47ff-a8b7-46632aa678f4/volumes"
Feb 27 17:28:53 crc kubenswrapper[4708]: I0227 17:28:53.046617 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zslk8"]
Feb 27 17:28:53 crc kubenswrapper[4708]: I0227 17:28:53.099726 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-zslk8"]
Feb 27 17:28:54 crc kubenswrapper[4708]: I0227 17:28:54.247141 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a78712b6-2f4f-4d79-a561-f30af5ee5733" path="/var/lib/kubelet/pods/a78712b6-2f4f-4d79-a561-f30af5ee5733/volumes"
Feb 27 17:29:00 crc kubenswrapper[4708]: I0227 17:29:00.287619 4708 scope.go:117] "RemoveContainer" containerID="a559c9472453fc7aebf081709d40ae47f7ec3655d6e88c63ecffa1c9ef143cb8"
Feb 27 17:29:00 crc kubenswrapper[4708]: I0227 17:29:00.327506 4708 scope.go:117] "RemoveContainer" containerID="cd5a4674d10a1a17cd90054ba1516ae342102cea189aafe41b687f1999821448"
Feb 27 17:29:00 crc kubenswrapper[4708]: I0227 17:29:00.406640 4708 scope.go:117] "RemoveContainer" containerID="ff80e8a3a3c8d6e369f2546d8302fd6601c6070bbf644506bd5132b037ec16fe"
Feb 27 17:29:00 crc kubenswrapper[4708]: I0227 17:29:00.452567 4708 scope.go:117] "RemoveContainer" containerID="00d1f54549468c77ba53bb980dc32ce0d08e537e3eee0e33d2d6a60ea8cb3067"
Feb 27 17:29:00 crc kubenswrapper[4708]: I0227 17:29:00.490052 4708 scope.go:117] "RemoveContainer" containerID="a1154e8a71e56f614329082eb40d25bb529e42fb4f0e005e812c27fb899b4386"
Feb 27 17:29:00 crc kubenswrapper[4708]: I0227 17:29:00.531532 4708 scope.go:117] "RemoveContainer" containerID="1bb6404ed1725ca80a52e5e7a01e4d3fc71aacd4aab6792a4bc0dca4ce5bf496"
Feb 27 17:29:00 crc kubenswrapper[4708]: I0227 17:29:00.589025 4708 scope.go:117] "RemoveContainer" containerID="484af3da40001ba3c31e8ab1ac6f6bf369cd6dd878bce80437638baca89aa3ac"
Feb 27 17:29:00 crc kubenswrapper[4708]: I0227 17:29:00.610712 4708 scope.go:117] "RemoveContainer" containerID="9869dc5390602739a9c7dad244f702f2d0930a201d13824b1426224bbea26287"
Feb 27 17:29:00 crc kubenswrapper[4708]: I0227 17:29:00.652672 4708 scope.go:117] "RemoveContainer" containerID="ba68da5c684eed94fd18a76506eae572d8320c027a9fcd8a7d2a7df216c4b28a"
Feb 27 17:29:00 crc kubenswrapper[4708]: I0227 17:29:00.690936 4708 scope.go:117] "RemoveContainer" containerID="b5cba25021303dbecb07161c3d8f8ddae573496c8a97fd7d8b635c839a5d6ae8"
Feb 27 17:29:32 crc kubenswrapper[4708]: I0227 17:29:32.949105 4708 generic.go:334] "Generic (PLEG): container finished" podID="41b80060-486e-4ab2-872a-cfbbdf39b405" containerID="cc2a9250f7709ee357e05225cf662b63dea74a001365dac22fde57c6e8f4d5cb" exitCode=0
Feb 27 17:29:32 crc kubenswrapper[4708]: I0227 17:29:32.949213 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22" event={"ID":"41b80060-486e-4ab2-872a-cfbbdf39b405","Type":"ContainerDied","Data":"cc2a9250f7709ee357e05225cf662b63dea74a001365dac22fde57c6e8f4d5cb"}
Feb 27 17:29:34 crc kubenswrapper[4708]: I0227 17:29:34.542322 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"
Feb 27 17:29:34 crc kubenswrapper[4708]: I0227 17:29:34.695923 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4lqbr\" (UniqueName: \"kubernetes.io/projected/41b80060-486e-4ab2-872a-cfbbdf39b405-kube-api-access-4lqbr\") pod \"41b80060-486e-4ab2-872a-cfbbdf39b405\" (UID: \"41b80060-486e-4ab2-872a-cfbbdf39b405\") "
Feb 27 17:29:34 crc kubenswrapper[4708]: I0227 17:29:34.695996 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41b80060-486e-4ab2-872a-cfbbdf39b405-ssh-key-openstack-edpm-ipam\") pod \"41b80060-486e-4ab2-872a-cfbbdf39b405\" (UID: \"41b80060-486e-4ab2-872a-cfbbdf39b405\") "
Feb 27 17:29:34 crc kubenswrapper[4708]: I0227 17:29:34.696034 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41b80060-486e-4ab2-872a-cfbbdf39b405-inventory\") pod \"41b80060-486e-4ab2-872a-cfbbdf39b405\" (UID: \"41b80060-486e-4ab2-872a-cfbbdf39b405\") "
Feb 27 17:29:34 crc kubenswrapper[4708]: I0227 17:29:34.702616 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b80060-486e-4ab2-872a-cfbbdf39b405-kube-api-access-4lqbr" (OuterVolumeSpecName: "kube-api-access-4lqbr") pod "41b80060-486e-4ab2-872a-cfbbdf39b405" (UID: "41b80060-486e-4ab2-872a-cfbbdf39b405"). InnerVolumeSpecName "kube-api-access-4lqbr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:29:34 crc kubenswrapper[4708]: I0227 17:29:34.726389 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b80060-486e-4ab2-872a-cfbbdf39b405-inventory" (OuterVolumeSpecName: "inventory") pod "41b80060-486e-4ab2-872a-cfbbdf39b405" (UID: "41b80060-486e-4ab2-872a-cfbbdf39b405"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:29:34 crc kubenswrapper[4708]: I0227 17:29:34.732099 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41b80060-486e-4ab2-872a-cfbbdf39b405-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "41b80060-486e-4ab2-872a-cfbbdf39b405" (UID: "41b80060-486e-4ab2-872a-cfbbdf39b405"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:29:34 crc kubenswrapper[4708]: I0227 17:29:34.798823 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4lqbr\" (UniqueName: \"kubernetes.io/projected/41b80060-486e-4ab2-872a-cfbbdf39b405-kube-api-access-4lqbr\") on node \"crc\" DevicePath \"\""
Feb 27 17:29:34 crc kubenswrapper[4708]: I0227 17:29:34.798861 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/41b80060-486e-4ab2-872a-cfbbdf39b405-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 27 17:29:34 crc kubenswrapper[4708]: I0227 17:29:34.798871 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/41b80060-486e-4ab2-872a-cfbbdf39b405-inventory\") on node \"crc\" DevicePath \"\""
Feb 27 17:29:34 crc kubenswrapper[4708]: I0227 17:29:34.967102 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22" event={"ID":"41b80060-486e-4ab2-872a-cfbbdf39b405","Type":"ContainerDied","Data":"e7fa95eee745ced0f94b20be8dd0f08e5134a0a47f9fc1df3f2c03734a1a0f7b"}
Feb 27 17:29:34 crc kubenswrapper[4708]: I0227 17:29:34.967149 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7fa95eee745ced0f94b20be8dd0f08e5134a0a47f9fc1df3f2c03734a1a0f7b"
Feb 27 17:29:34 crc kubenswrapper[4708]: I0227 17:29:34.967206 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-8zf22"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.063934 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mddsz"]
Feb 27 17:29:35 crc kubenswrapper[4708]: E0227 17:29:35.064359 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b80060-486e-4ab2-872a-cfbbdf39b405" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.064380 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b80060-486e-4ab2-872a-cfbbdf39b405" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.064565 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b80060-486e-4ab2-872a-cfbbdf39b405" containerName="configure-os-edpm-deployment-openstack-edpm-ipam"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.065263 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mddsz"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.067462 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.067842 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.067999 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.068149 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.119269 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mddsz"]
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.208212 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8885c7ac-9dbc-4dba-89c1-ea98a342af30-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mddsz\" (UID: \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\") " pod="openstack/ssh-known-hosts-edpm-deployment-mddsz"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.208275 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8885c7ac-9dbc-4dba-89c1-ea98a342af30-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mddsz\" (UID: \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\") " pod="openstack/ssh-known-hosts-edpm-deployment-mddsz"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.208425 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdmv4\" (UniqueName: \"kubernetes.io/projected/8885c7ac-9dbc-4dba-89c1-ea98a342af30-kube-api-access-qdmv4\") pod \"ssh-known-hosts-edpm-deployment-mddsz\" (UID: \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\") " pod="openstack/ssh-known-hosts-edpm-deployment-mddsz"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.310977 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdmv4\" (UniqueName: \"kubernetes.io/projected/8885c7ac-9dbc-4dba-89c1-ea98a342af30-kube-api-access-qdmv4\") pod \"ssh-known-hosts-edpm-deployment-mddsz\" (UID: \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\") " pod="openstack/ssh-known-hosts-edpm-deployment-mddsz"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.311095 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8885c7ac-9dbc-4dba-89c1-ea98a342af30-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mddsz\" (UID: \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\") " pod="openstack/ssh-known-hosts-edpm-deployment-mddsz"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.311134 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8885c7ac-9dbc-4dba-89c1-ea98a342af30-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mddsz\" (UID: \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\") " pod="openstack/ssh-known-hosts-edpm-deployment-mddsz"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.316812 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8885c7ac-9dbc-4dba-89c1-ea98a342af30-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-mddsz\" (UID: \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\") " pod="openstack/ssh-known-hosts-edpm-deployment-mddsz"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.316892 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8885c7ac-9dbc-4dba-89c1-ea98a342af30-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-mddsz\" (UID: \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\") " pod="openstack/ssh-known-hosts-edpm-deployment-mddsz"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.343229 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdmv4\" (UniqueName: \"kubernetes.io/projected/8885c7ac-9dbc-4dba-89c1-ea98a342af30-kube-api-access-qdmv4\") pod \"ssh-known-hosts-edpm-deployment-mddsz\" (UID: \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\") " pod="openstack/ssh-known-hosts-edpm-deployment-mddsz"
Feb 27 17:29:35 crc kubenswrapper[4708]: I0227 17:29:35.421186 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mddsz"
Feb 27 17:29:36 crc kubenswrapper[4708]: I0227 17:29:36.133091 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-mddsz"]
Feb 27 17:29:36 crc kubenswrapper[4708]: I0227 17:29:36.989498 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mddsz" event={"ID":"8885c7ac-9dbc-4dba-89c1-ea98a342af30","Type":"ContainerStarted","Data":"d23095a9d27b605c8e0b9098f7f611debd69c23bce168995533aa736749b5187"}
Feb 27 17:29:37 crc kubenswrapper[4708]: I0227 17:29:37.048039 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-szwb6"]
Feb 27 17:29:37 crc kubenswrapper[4708]: I0227 17:29:37.055633 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-szwb6"]
Feb 27 17:29:38 crc kubenswrapper[4708]: I0227 17:29:38.003896 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mddsz" event={"ID":"8885c7ac-9dbc-4dba-89c1-ea98a342af30","Type":"ContainerStarted","Data":"7970598f55ddca1661fd681ab10c44d795d496eb12dbcd3f901f51330c7b01cb"}
Feb 27 17:29:38 crc kubenswrapper[4708]: I0227 17:29:38.033953 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-mddsz" podStartSLOduration=2.332508986 podStartE2EDuration="3.03392177s" podCreationTimestamp="2026-02-27 17:29:35 +0000 UTC" firstStartedPulling="2026-02-27 17:29:36.135592826 +0000 UTC m=+2174.651390453" lastFinishedPulling="2026-02-27 17:29:36.83700565 +0000 UTC m=+2175.352803237" observedRunningTime="2026-02-27 17:29:38.027411947 +0000 UTC m=+2176.543209564" watchObservedRunningTime="2026-02-27 17:29:38.03392177 +0000 UTC m=+2176.549719357"
Feb 27 17:29:38 crc kubenswrapper[4708]: I0227 17:29:38.241835 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1" path="/var/lib/kubelet/pods/a7704692-0ff4-44b3-ae6d-cefd6dfdb4c1/volumes"
Feb 27 17:29:44 crc kubenswrapper[4708]: I0227 17:29:44.096484 4708 generic.go:334] "Generic (PLEG): container finished" podID="8885c7ac-9dbc-4dba-89c1-ea98a342af30" containerID="7970598f55ddca1661fd681ab10c44d795d496eb12dbcd3f901f51330c7b01cb" exitCode=0
Feb 27 17:29:44 crc kubenswrapper[4708]: I0227 17:29:44.096567 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mddsz" event={"ID":"8885c7ac-9dbc-4dba-89c1-ea98a342af30","Type":"ContainerDied","Data":"7970598f55ddca1661fd681ab10c44d795d496eb12dbcd3f901f51330c7b01cb"}
Feb 27 17:29:45 crc kubenswrapper[4708]: I0227 17:29:45.739497 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mddsz"
Feb 27 17:29:45 crc kubenswrapper[4708]: I0227 17:29:45.866417 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdmv4\" (UniqueName: \"kubernetes.io/projected/8885c7ac-9dbc-4dba-89c1-ea98a342af30-kube-api-access-qdmv4\") pod \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\" (UID: \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\") "
Feb 27 17:29:45 crc kubenswrapper[4708]: I0227 17:29:45.866772 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8885c7ac-9dbc-4dba-89c1-ea98a342af30-inventory-0\") pod \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\" (UID: \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\") "
Feb 27 17:29:45 crc kubenswrapper[4708]: I0227 17:29:45.866947 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8885c7ac-9dbc-4dba-89c1-ea98a342af30-ssh-key-openstack-edpm-ipam\") pod \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\" (UID: \"8885c7ac-9dbc-4dba-89c1-ea98a342af30\") "
Feb 27 17:29:45 crc kubenswrapper[4708]: I0227 17:29:45.872378 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8885c7ac-9dbc-4dba-89c1-ea98a342af30-kube-api-access-qdmv4" (OuterVolumeSpecName: "kube-api-access-qdmv4") pod "8885c7ac-9dbc-4dba-89c1-ea98a342af30" (UID: "8885c7ac-9dbc-4dba-89c1-ea98a342af30"). InnerVolumeSpecName "kube-api-access-qdmv4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:29:45 crc kubenswrapper[4708]: I0227 17:29:45.903804 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8885c7ac-9dbc-4dba-89c1-ea98a342af30-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "8885c7ac-9dbc-4dba-89c1-ea98a342af30" (UID: "8885c7ac-9dbc-4dba-89c1-ea98a342af30"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:29:45 crc kubenswrapper[4708]: I0227 17:29:45.905073 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8885c7ac-9dbc-4dba-89c1-ea98a342af30-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8885c7ac-9dbc-4dba-89c1-ea98a342af30" (UID: "8885c7ac-9dbc-4dba-89c1-ea98a342af30"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:29:45 crc kubenswrapper[4708]: I0227 17:29:45.969172 4708 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/8885c7ac-9dbc-4dba-89c1-ea98a342af30-inventory-0\") on node \"crc\" DevicePath \"\""
Feb 27 17:29:45 crc kubenswrapper[4708]: I0227 17:29:45.969206 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8885c7ac-9dbc-4dba-89c1-ea98a342af30-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 27 17:29:45 crc kubenswrapper[4708]: I0227 17:29:45.969218 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdmv4\" (UniqueName: \"kubernetes.io/projected/8885c7ac-9dbc-4dba-89c1-ea98a342af30-kube-api-access-qdmv4\") on node \"crc\" DevicePath \"\""
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.126588 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-mddsz" event={"ID":"8885c7ac-9dbc-4dba-89c1-ea98a342af30","Type":"ContainerDied","Data":"d23095a9d27b605c8e0b9098f7f611debd69c23bce168995533aa736749b5187"}
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.126809 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d23095a9d27b605c8e0b9098f7f611debd69c23bce168995533aa736749b5187"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.126678 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-mddsz"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.247673 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"]
Feb 27 17:29:46 crc kubenswrapper[4708]: E0227 17:29:46.248101 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8885c7ac-9dbc-4dba-89c1-ea98a342af30" containerName="ssh-known-hosts-edpm-deployment"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.248120 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="8885c7ac-9dbc-4dba-89c1-ea98a342af30" containerName="ssh-known-hosts-edpm-deployment"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.248366 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="8885c7ac-9dbc-4dba-89c1-ea98a342af30" containerName="ssh-known-hosts-edpm-deployment"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.249220 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.251853 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.252080 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.252464 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.252670 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.257961 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"]
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.279985 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcgk2\" (UniqueName: \"kubernetes.io/projected/4f284073-5b25-4831-86e7-6b9165c34d73-kube-api-access-jcgk2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tpt6m\" (UID: \"4f284073-5b25-4831-86e7-6b9165c34d73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.280224 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f284073-5b25-4831-86e7-6b9165c34d73-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tpt6m\" (UID: \"4f284073-5b25-4831-86e7-6b9165c34d73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.280520 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f284073-5b25-4831-86e7-6b9165c34d73-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tpt6m\" (UID: \"4f284073-5b25-4831-86e7-6b9165c34d73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.382871 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f284073-5b25-4831-86e7-6b9165c34d73-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tpt6m\" (UID: \"4f284073-5b25-4831-86e7-6b9165c34d73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.383170 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jcgk2\" (UniqueName: \"kubernetes.io/projected/4f284073-5b25-4831-86e7-6b9165c34d73-kube-api-access-jcgk2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tpt6m\" (UID: \"4f284073-5b25-4831-86e7-6b9165c34d73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.383355 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f284073-5b25-4831-86e7-6b9165c34d73-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tpt6m\" (UID: \"4f284073-5b25-4831-86e7-6b9165c34d73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.387386 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f284073-5b25-4831-86e7-6b9165c34d73-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tpt6m\" (UID: \"4f284073-5b25-4831-86e7-6b9165c34d73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.389963 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f284073-5b25-4831-86e7-6b9165c34d73-ssh-key-openstack-edpm-ipam\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tpt6m\" (UID: \"4f284073-5b25-4831-86e7-6b9165c34d73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.405544 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jcgk2\" (UniqueName: \"kubernetes.io/projected/4f284073-5b25-4831-86e7-6b9165c34d73-kube-api-access-jcgk2\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-tpt6m\" (UID: \"4f284073-5b25-4831-86e7-6b9165c34d73\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"
Feb 27 17:29:46 crc kubenswrapper[4708]: I0227 17:29:46.589326 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"
Feb 27 17:29:47 crc kubenswrapper[4708]: I0227 17:29:47.227566 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"]
Feb 27 17:29:47 crc kubenswrapper[4708]: I0227 17:29:47.230136 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 27 17:29:48 crc kubenswrapper[4708]: I0227 17:29:48.148148 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m" event={"ID":"4f284073-5b25-4831-86e7-6b9165c34d73","Type":"ContainerStarted","Data":"89c24edc8f5dac211b3aefa27ba5964c273869da0f768625590280d7c7f6bdc9"}
Feb 27 17:29:49 crc kubenswrapper[4708]: I0227 17:29:49.170905 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m" event={"ID":"4f284073-5b25-4831-86e7-6b9165c34d73","Type":"ContainerStarted","Data":"ac4c21d05cf13de7b812e5bdf79a981dc8f05cfc54c5d6f375339640b6e6047c"}
Feb 27 17:29:49 crc kubenswrapper[4708]: I0227 17:29:49.195465 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m" podStartSLOduration=2.450413305 podStartE2EDuration="3.19544726s" podCreationTimestamp="2026-02-27 17:29:46 +0000 UTC" firstStartedPulling="2026-02-27 17:29:47.229962121 +0000 UTC m=+2185.745759698" lastFinishedPulling="2026-02-27 17:29:47.974996066 +0000 UTC m=+2186.490793653" observedRunningTime="2026-02-27 17:29:49.19224954 +0000 UTC m=+2187.708047137" watchObservedRunningTime="2026-02-27 17:29:49.19544726 +0000 UTC m=+2187.711244857"
Feb 27 17:29:55 crc kubenswrapper[4708]: I0227 17:29:55.687372 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-rjtc6"]
Feb 27 17:29:55 crc kubenswrapper[4708]: I0227 17:29:55.690058 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjtc6"
Feb 27 17:29:55 crc kubenswrapper[4708]: I0227 17:29:55.700579 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjtc6"]
Feb 27 17:29:55 crc kubenswrapper[4708]: I0227 17:29:55.707809 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4714e4-4cd7-49c1-ab11-b708629976b1-catalog-content\") pod \"redhat-operators-rjtc6\" (UID: \"4e4714e4-4cd7-49c1-ab11-b708629976b1\") " pod="openshift-marketplace/redhat-operators-rjtc6"
Feb 27 17:29:55 crc kubenswrapper[4708]: I0227 17:29:55.708808 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckxcs\" (UniqueName: \"kubernetes.io/projected/4e4714e4-4cd7-49c1-ab11-b708629976b1-kube-api-access-ckxcs\") pod \"redhat-operators-rjtc6\" (UID: \"4e4714e4-4cd7-49c1-ab11-b708629976b1\") " pod="openshift-marketplace/redhat-operators-rjtc6"
Feb 27 17:29:55 crc kubenswrapper[4708]: I0227 17:29:55.709058 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4714e4-4cd7-49c1-ab11-b708629976b1-utilities\") pod \"redhat-operators-rjtc6\" (UID: \"4e4714e4-4cd7-49c1-ab11-b708629976b1\") " pod="openshift-marketplace/redhat-operators-rjtc6"
Feb 27 17:29:55 crc kubenswrapper[4708]: I0227 17:29:55.810202 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckxcs\" (UniqueName: \"kubernetes.io/projected/4e4714e4-4cd7-49c1-ab11-b708629976b1-kube-api-access-ckxcs\") pod \"redhat-operators-rjtc6\" (UID: \"4e4714e4-4cd7-49c1-ab11-b708629976b1\") " pod="openshift-marketplace/redhat-operators-rjtc6"
Feb 27 17:29:55 crc kubenswrapper[4708]: I0227 17:29:55.810299 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4714e4-4cd7-49c1-ab11-b708629976b1-utilities\") pod \"redhat-operators-rjtc6\" (UID: \"4e4714e4-4cd7-49c1-ab11-b708629976b1\") " pod="openshift-marketplace/redhat-operators-rjtc6"
Feb 27 17:29:55 crc kubenswrapper[4708]: I0227 17:29:55.810377 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4714e4-4cd7-49c1-ab11-b708629976b1-catalog-content\") pod \"redhat-operators-rjtc6\" (UID: \"4e4714e4-4cd7-49c1-ab11-b708629976b1\") " pod="openshift-marketplace/redhat-operators-rjtc6"
Feb 27 17:29:55 crc kubenswrapper[4708]: I0227 17:29:55.810791 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4714e4-4cd7-49c1-ab11-b708629976b1-catalog-content\") pod \"redhat-operators-rjtc6\" (UID: \"4e4714e4-4cd7-49c1-ab11-b708629976b1\") " pod="openshift-marketplace/redhat-operators-rjtc6"
Feb 27 17:29:55 crc kubenswrapper[4708]: I0227 17:29:55.810944 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4714e4-4cd7-49c1-ab11-b708629976b1-utilities\") pod \"redhat-operators-rjtc6\" (UID: \"4e4714e4-4cd7-49c1-ab11-b708629976b1\") " pod="openshift-marketplace/redhat-operators-rjtc6"
Feb 27 17:29:55 crc kubenswrapper[4708]: I0227 17:29:55.836187 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckxcs\" (UniqueName: \"kubernetes.io/projected/4e4714e4-4cd7-49c1-ab11-b708629976b1-kube-api-access-ckxcs\") pod \"redhat-operators-rjtc6\" (UID: \"4e4714e4-4cd7-49c1-ab11-b708629976b1\") " pod="openshift-marketplace/redhat-operators-rjtc6"
Feb 27 17:29:56 crc kubenswrapper[4708]: I0227 17:29:56.038444 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjtc6"
Feb 27 17:29:56 crc kubenswrapper[4708]: I0227 17:29:56.582437 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rjtc6"]
Feb 27 17:29:57 crc kubenswrapper[4708]: I0227 17:29:57.255343 4708 generic.go:334] "Generic (PLEG): container finished" podID="4e4714e4-4cd7-49c1-ab11-b708629976b1" containerID="11f31460cf10fd032579997cf230ae064b1f6f89f80fc809b31543eb0688dde6" exitCode=0
Feb 27 17:29:57 crc kubenswrapper[4708]: I0227 17:29:57.255595 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjtc6" event={"ID":"4e4714e4-4cd7-49c1-ab11-b708629976b1","Type":"ContainerDied","Data":"11f31460cf10fd032579997cf230ae064b1f6f89f80fc809b31543eb0688dde6"}
Feb 27 17:29:57 crc kubenswrapper[4708]: I0227 17:29:57.255623 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjtc6" event={"ID":"4e4714e4-4cd7-49c1-ab11-b708629976b1","Type":"ContainerStarted","Data":"c7502218e7a4b3765cfc43726ec116fed0b5c7459d6f86b466e37805b76f4e45"}
Feb 27 17:29:57 crc kubenswrapper[4708]: I0227 17:29:57.260587 4708 generic.go:334] "Generic (PLEG): container finished" podID="4f284073-5b25-4831-86e7-6b9165c34d73" containerID="ac4c21d05cf13de7b812e5bdf79a981dc8f05cfc54c5d6f375339640b6e6047c" exitCode=0
Feb 27 17:29:57 crc kubenswrapper[4708]: I0227 17:29:57.260668 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m" event={"ID":"4f284073-5b25-4831-86e7-6b9165c34d73","Type":"ContainerDied","Data":"ac4c21d05cf13de7b812e5bdf79a981dc8f05cfc54c5d6f375339640b6e6047c"}
Feb 27 17:29:58 crc kubenswrapper[4708]: I0227 17:29:58.789280 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"
Feb 27 17:29:58 crc kubenswrapper[4708]: I0227 17:29:58.881980 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f284073-5b25-4831-86e7-6b9165c34d73-inventory\") pod \"4f284073-5b25-4831-86e7-6b9165c34d73\" (UID: \"4f284073-5b25-4831-86e7-6b9165c34d73\") "
Feb 27 17:29:58 crc kubenswrapper[4708]: I0227 17:29:58.882056 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f284073-5b25-4831-86e7-6b9165c34d73-ssh-key-openstack-edpm-ipam\") pod \"4f284073-5b25-4831-86e7-6b9165c34d73\" (UID: \"4f284073-5b25-4831-86e7-6b9165c34d73\") "
Feb 27 17:29:58 crc kubenswrapper[4708]: I0227 17:29:58.882277 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jcgk2\" (UniqueName: \"kubernetes.io/projected/4f284073-5b25-4831-86e7-6b9165c34d73-kube-api-access-jcgk2\") pod \"4f284073-5b25-4831-86e7-6b9165c34d73\" (UID: \"4f284073-5b25-4831-86e7-6b9165c34d73\") "
Feb 27 17:29:58 crc kubenswrapper[4708]: I0227 17:29:58.888351 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f284073-5b25-4831-86e7-6b9165c34d73-kube-api-access-jcgk2" (OuterVolumeSpecName: "kube-api-access-jcgk2") pod "4f284073-5b25-4831-86e7-6b9165c34d73" (UID: "4f284073-5b25-4831-86e7-6b9165c34d73"). InnerVolumeSpecName "kube-api-access-jcgk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:29:58 crc kubenswrapper[4708]: I0227 17:29:58.915441 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f284073-5b25-4831-86e7-6b9165c34d73-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "4f284073-5b25-4831-86e7-6b9165c34d73" (UID: "4f284073-5b25-4831-86e7-6b9165c34d73"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:29:58 crc kubenswrapper[4708]: I0227 17:29:58.920151 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f284073-5b25-4831-86e7-6b9165c34d73-inventory" (OuterVolumeSpecName: "inventory") pod "4f284073-5b25-4831-86e7-6b9165c34d73" (UID: "4f284073-5b25-4831-86e7-6b9165c34d73"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 17:29:58 crc kubenswrapper[4708]: I0227 17:29:58.984744 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jcgk2\" (UniqueName: \"kubernetes.io/projected/4f284073-5b25-4831-86e7-6b9165c34d73-kube-api-access-jcgk2\") on node \"crc\" DevicePath \"\""
Feb 27 17:29:58 crc kubenswrapper[4708]: I0227 17:29:58.984797 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/4f284073-5b25-4831-86e7-6b9165c34d73-inventory\") on node \"crc\" DevicePath \"\""
Feb 27 17:29:58 crc kubenswrapper[4708]: I0227 17:29:58.984816 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/4f284073-5b25-4831-86e7-6b9165c34d73-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\""
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.284874 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjtc6" event={"ID":"4e4714e4-4cd7-49c1-ab11-b708629976b1","Type":"ContainerStarted","Data":"cb56ab418b27bc3ab52e8eee07c6281e546e7ec0798f07fb5574f35f471552c5"}
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.287172 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m" event={"ID":"4f284073-5b25-4831-86e7-6b9165c34d73","Type":"ContainerDied","Data":"89c24edc8f5dac211b3aefa27ba5964c273869da0f768625590280d7c7f6bdc9"}
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.287212 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89c24edc8f5dac211b3aefa27ba5964c273869da0f768625590280d7c7f6bdc9"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.287472 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-tpt6m"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.480482 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm"]
Feb 27 17:29:59 crc kubenswrapper[4708]: E0227 17:29:59.481219 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f284073-5b25-4831-86e7-6b9165c34d73" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.481242 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f284073-5b25-4831-86e7-6b9165c34d73" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.481543 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f284073-5b25-4831-86e7-6b9165c34d73" containerName="run-os-edpm-deployment-openstack-edpm-ipam"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.482414 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.484464 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.485420 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.485542 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.485682 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.492703 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm"]
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.595763 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2d8b39-89e2-4743-910d-c5471b6a327c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm\" (UID: \"2b2d8b39-89e2-4743-910d-c5471b6a327c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.595985 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7qs2\" (UniqueName: \"kubernetes.io/projected/2b2d8b39-89e2-4743-910d-c5471b6a327c-kube-api-access-m7qs2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm\" (UID: \"2b2d8b39-89e2-4743-910d-c5471b6a327c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.596221 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2d8b39-89e2-4743-910d-c5471b6a327c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm\" (UID: \"2b2d8b39-89e2-4743-910d-c5471b6a327c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.697753 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7qs2\" (UniqueName: \"kubernetes.io/projected/2b2d8b39-89e2-4743-910d-c5471b6a327c-kube-api-access-m7qs2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm\" (UID: \"2b2d8b39-89e2-4743-910d-c5471b6a327c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.697900 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2d8b39-89e2-4743-910d-c5471b6a327c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm\" (UID: \"2b2d8b39-89e2-4743-910d-c5471b6a327c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.697993 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2d8b39-89e2-4743-910d-c5471b6a327c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm\" (UID: \"2b2d8b39-89e2-4743-910d-c5471b6a327c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.703806 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2d8b39-89e2-4743-910d-c5471b6a327c-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm\" (UID: \"2b2d8b39-89e2-4743-910d-c5471b6a327c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.712428 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2d8b39-89e2-4743-910d-c5471b6a327c-ssh-key-openstack-edpm-ipam\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm\" (UID: \"2b2d8b39-89e2-4743-910d-c5471b6a327c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.719163 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7qs2\" (UniqueName: \"kubernetes.io/projected/2b2d8b39-89e2-4743-910d-c5471b6a327c-kube-api-access-m7qs2\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm\" (UID: \"2b2d8b39-89e2-4743-910d-c5471b6a327c\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm"
Feb 27 17:29:59 crc kubenswrapper[4708]: I0227 17:29:59.797591 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.149469 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536890-vd89r"]
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.151217 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536890-vd89r"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.153143 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.153951 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.154392 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.165036 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"]
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.166738 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.169368 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.169576 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.188005 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536890-vd89r"]
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.211731 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"]
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.311869 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/085aa630-d7eb-49b7-8f73-7291681011e7-config-volume\") pod \"collect-profiles-29536890-knvxn\" (UID: \"085aa630-d7eb-49b7-8f73-7291681011e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.312323 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzfwh\" (UniqueName: \"kubernetes.io/projected/756e0e58-e2ac-4348-8ce6-db4fad770f68-kube-api-access-qzfwh\") pod \"auto-csr-approver-29536890-vd89r\" (UID: \"756e0e58-e2ac-4348-8ce6-db4fad770f68\") " pod="openshift-infra/auto-csr-approver-29536890-vd89r"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.312455 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/085aa630-d7eb-49b7-8f73-7291681011e7-secret-volume\") pod \"collect-profiles-29536890-knvxn\" (UID: \"085aa630-d7eb-49b7-8f73-7291681011e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.312593 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85544\" (UniqueName: \"kubernetes.io/projected/085aa630-d7eb-49b7-8f73-7291681011e7-kube-api-access-85544\") pod \"collect-profiles-29536890-knvxn\" (UID: \"085aa630-d7eb-49b7-8f73-7291681011e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.409369 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm"]
Feb 27 17:30:00 crc kubenswrapper[4708]: W0227 17:30:00.416293 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b2d8b39_89e2_4743_910d_c5471b6a327c.slice/crio-86ce4a93da238151b6b2bb7747ee721feb4d8ad35e2d7c7d3fdb163c27540bfb WatchSource:0}: Error finding container 86ce4a93da238151b6b2bb7747ee721feb4d8ad35e2d7c7d3fdb163c27540bfb: Status 404 returned error can't find the container with id 86ce4a93da238151b6b2bb7747ee721feb4d8ad35e2d7c7d3fdb163c27540bfb
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.418342 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qzfwh\" (UniqueName: \"kubernetes.io/projected/756e0e58-e2ac-4348-8ce6-db4fad770f68-kube-api-access-qzfwh\") pod \"auto-csr-approver-29536890-vd89r\" (UID: \"756e0e58-e2ac-4348-8ce6-db4fad770f68\") " pod="openshift-infra/auto-csr-approver-29536890-vd89r"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.418421 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/085aa630-d7eb-49b7-8f73-7291681011e7-secret-volume\") pod \"collect-profiles-29536890-knvxn\" (UID: \"085aa630-d7eb-49b7-8f73-7291681011e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.418478 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85544\" (UniqueName: \"kubernetes.io/projected/085aa630-d7eb-49b7-8f73-7291681011e7-kube-api-access-85544\") pod \"collect-profiles-29536890-knvxn\" (UID: \"085aa630-d7eb-49b7-8f73-7291681011e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.418544 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/085aa630-d7eb-49b7-8f73-7291681011e7-config-volume\") pod \"collect-profiles-29536890-knvxn\" (UID: \"085aa630-d7eb-49b7-8f73-7291681011e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.419685 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/085aa630-d7eb-49b7-8f73-7291681011e7-config-volume\") pod \"collect-profiles-29536890-knvxn\" (UID: \"085aa630-d7eb-49b7-8f73-7291681011e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.425306 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/085aa630-d7eb-49b7-8f73-7291681011e7-secret-volume\") pod \"collect-profiles-29536890-knvxn\" (UID: \"085aa630-d7eb-49b7-8f73-7291681011e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.436261 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85544\" (UniqueName: \"kubernetes.io/projected/085aa630-d7eb-49b7-8f73-7291681011e7-kube-api-access-85544\") pod \"collect-profiles-29536890-knvxn\" (UID: \"085aa630-d7eb-49b7-8f73-7291681011e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.440289 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qzfwh\" (UniqueName: \"kubernetes.io/projected/756e0e58-e2ac-4348-8ce6-db4fad770f68-kube-api-access-qzfwh\") pod \"auto-csr-approver-29536890-vd89r\" (UID: \"756e0e58-e2ac-4348-8ce6-db4fad770f68\") " pod="openshift-infra/auto-csr-approver-29536890-vd89r"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.477453 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536890-vd89r"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.490250 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"
Feb 27 17:30:00 crc kubenswrapper[4708]: I0227 17:30:00.945411 4708 scope.go:117] "RemoveContainer" containerID="a6995c6d0a968ffac38663c17c29a199b1455a863c20e7ec885cde8cba392d2c"
Feb 27 17:30:01 crc kubenswrapper[4708]: I0227 17:30:01.058043 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536890-vd89r"]
Feb 27 17:30:01 crc kubenswrapper[4708]: I0227 17:30:01.083858 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"]
Feb 27 17:30:01 crc kubenswrapper[4708]: I0227 17:30:01.306923 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm" event={"ID":"2b2d8b39-89e2-4743-910d-c5471b6a327c","Type":"ContainerStarted","Data":"86ce4a93da238151b6b2bb7747ee721feb4d8ad35e2d7c7d3fdb163c27540bfb"}
Feb 27 17:30:01 crc kubenswrapper[4708]: I0227 17:30:01.308305 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn" event={"ID":"085aa630-d7eb-49b7-8f73-7291681011e7","Type":"ContainerStarted","Data":"221f9b094858bcd03a52c546f7398cd0ab2d22ea4fc72295b76b41eccc8ae983"}
Feb 27 17:30:01 crc kubenswrapper[4708]: I0227 17:30:01.309592 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536890-vd89r" event={"ID":"756e0e58-e2ac-4348-8ce6-db4fad770f68","Type":"ContainerStarted","Data":"4f6061458e6721d7a51104cdde44de7636aa8d407bd48e30817a0b285222136d"}
Feb 27 17:30:02 crc kubenswrapper[4708]: I0227 17:30:02.318011 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm" event={"ID":"2b2d8b39-89e2-4743-910d-c5471b6a327c","Type":"ContainerStarted","Data":"59686a6efd71004ca2ac5bd9349e18e9d7c0ff3226ee579fc0f61be9ca2661d3"}
Feb 27 17:30:02 crc kubenswrapper[4708]: I0227 17:30:02.319577 4708 generic.go:334] "Generic (PLEG): container finished" podID="085aa630-d7eb-49b7-8f73-7291681011e7" containerID="4c5d81f09c0a26ade0b95567c3ee3477e1cd8af2276ee8e3322621b3a20b01f4" exitCode=0
Feb 27 17:30:02 crc kubenswrapper[4708]: I0227 17:30:02.319618 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn" event={"ID":"085aa630-d7eb-49b7-8f73-7291681011e7","Type":"ContainerDied","Data":"4c5d81f09c0a26ade0b95567c3ee3477e1cd8af2276ee8e3322621b3a20b01f4"}
Feb 27 17:30:02 crc kubenswrapper[4708]: I0227 17:30:02.378075 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm" podStartSLOduration=2.584045099 podStartE2EDuration="3.378055036s" podCreationTimestamp="2026-02-27 17:29:59 +0000 UTC" firstStartedPulling="2026-02-27 17:30:00.419566294 +0000 UTC m=+2198.935363891" lastFinishedPulling="2026-02-27 17:30:01.213576241 +0000 UTC m=+2199.729373828" observedRunningTime="2026-02-27 17:30:02.35024002 +0000 UTC m=+2200.866037607" watchObservedRunningTime="2026-02-27 17:30:02.378055036 +0000 UTC m=+2200.893852623"
Feb 27 17:30:03 crc kubenswrapper[4708]: I0227 17:30:03.329748 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536890-vd89r" event={"ID":"756e0e58-e2ac-4348-8ce6-db4fad770f68","Type":"ContainerStarted","Data":"aff1c6f450b4cdee7a7bd72c7e2fc10da262b8925147f756faa1b1399f0bdf7a"}
Feb 27 17:30:03 crc kubenswrapper[4708]: I0227 17:30:03.806210 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"
Feb 27 17:30:03 crc kubenswrapper[4708]: I0227 17:30:03.822202 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536890-vd89r" podStartSLOduration=2.102786178 podStartE2EDuration="3.822175019s" podCreationTimestamp="2026-02-27 17:30:00 +0000 UTC" firstStartedPulling="2026-02-27 17:30:01.080162065 +0000 UTC m=+2199.595959652" lastFinishedPulling="2026-02-27 17:30:02.799550906 +0000 UTC m=+2201.315348493" observedRunningTime="2026-02-27 17:30:03.349252615 +0000 UTC m=+2201.865050202" watchObservedRunningTime="2026-02-27 17:30:03.822175019 +0000 UTC m=+2202.337972636"
Feb 27 17:30:03 crc kubenswrapper[4708]: I0227 17:30:03.905760 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/085aa630-d7eb-49b7-8f73-7291681011e7-config-volume\") pod \"085aa630-d7eb-49b7-8f73-7291681011e7\" (UID: \"085aa630-d7eb-49b7-8f73-7291681011e7\") "
Feb 27 17:30:03 crc kubenswrapper[4708]: I0227 17:30:03.905926 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/085aa630-d7eb-49b7-8f73-7291681011e7-secret-volume\") pod \"085aa630-d7eb-49b7-8f73-7291681011e7\" (UID: \"085aa630-d7eb-49b7-8f73-7291681011e7\") "
Feb 27 17:30:03 crc kubenswrapper[4708]: I0227 17:30:03.906129 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85544\" (UniqueName: \"kubernetes.io/projected/085aa630-d7eb-49b7-8f73-7291681011e7-kube-api-access-85544\") pod \"085aa630-d7eb-49b7-8f73-7291681011e7\" (UID: \"085aa630-d7eb-49b7-8f73-7291681011e7\") "
Feb 27 17:30:03 crc kubenswrapper[4708]: I0227 17:30:03.906672 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/085aa630-d7eb-49b7-8f73-7291681011e7-config-volume" (OuterVolumeSpecName: "config-volume") pod "085aa630-d7eb-49b7-8f73-7291681011e7" (UID: "085aa630-d7eb-49b7-8f73-7291681011e7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 17:30:03 crc kubenswrapper[4708]: I0227 17:30:03.911681 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/085aa630-d7eb-49b7-8f73-7291681011e7-kube-api-access-85544" (OuterVolumeSpecName: "kube-api-access-85544") pod "085aa630-d7eb-49b7-8f73-7291681011e7" (UID: "085aa630-d7eb-49b7-8f73-7291681011e7"). InnerVolumeSpecName "kube-api-access-85544". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:30:03 crc kubenswrapper[4708]: I0227 17:30:03.911866 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/085aa630-d7eb-49b7-8f73-7291681011e7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "085aa630-d7eb-49b7-8f73-7291681011e7" (UID: "085aa630-d7eb-49b7-8f73-7291681011e7"). InnerVolumeSpecName "secret-volume".
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:30:04 crc kubenswrapper[4708]: I0227 17:30:04.014936 4708 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/085aa630-d7eb-49b7-8f73-7291681011e7-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:04 crc kubenswrapper[4708]: I0227 17:30:04.014969 4708 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/085aa630-d7eb-49b7-8f73-7291681011e7-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:04 crc kubenswrapper[4708]: I0227 17:30:04.014982 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85544\" (UniqueName: \"kubernetes.io/projected/085aa630-d7eb-49b7-8f73-7291681011e7-kube-api-access-85544\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:04 crc kubenswrapper[4708]: I0227 17:30:04.343085 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn" event={"ID":"085aa630-d7eb-49b7-8f73-7291681011e7","Type":"ContainerDied","Data":"221f9b094858bcd03a52c546f7398cd0ab2d22ea4fc72295b76b41eccc8ae983"} Feb 27 17:30:04 crc kubenswrapper[4708]: I0227 17:30:04.343420 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="221f9b094858bcd03a52c546f7398cd0ab2d22ea4fc72295b76b41eccc8ae983" Feb 27 17:30:04 crc kubenswrapper[4708]: I0227 17:30:04.343823 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn" Feb 27 17:30:04 crc kubenswrapper[4708]: I0227 17:30:04.345128 4708 generic.go:334] "Generic (PLEG): container finished" podID="756e0e58-e2ac-4348-8ce6-db4fad770f68" containerID="aff1c6f450b4cdee7a7bd72c7e2fc10da262b8925147f756faa1b1399f0bdf7a" exitCode=0 Feb 27 17:30:04 crc kubenswrapper[4708]: I0227 17:30:04.345168 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536890-vd89r" event={"ID":"756e0e58-e2ac-4348-8ce6-db4fad770f68","Type":"ContainerDied","Data":"aff1c6f450b4cdee7a7bd72c7e2fc10da262b8925147f756faa1b1399f0bdf7a"} Feb 27 17:30:04 crc kubenswrapper[4708]: I0227 17:30:04.902741 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p"] Feb 27 17:30:04 crc kubenswrapper[4708]: I0227 17:30:04.917884 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536845-ws52p"] Feb 27 17:30:05 crc kubenswrapper[4708]: I0227 17:30:05.904082 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536890-vd89r" Feb 27 17:30:06 crc kubenswrapper[4708]: I0227 17:30:06.057072 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qzfwh\" (UniqueName: \"kubernetes.io/projected/756e0e58-e2ac-4348-8ce6-db4fad770f68-kube-api-access-qzfwh\") pod \"756e0e58-e2ac-4348-8ce6-db4fad770f68\" (UID: \"756e0e58-e2ac-4348-8ce6-db4fad770f68\") " Feb 27 17:30:06 crc kubenswrapper[4708]: I0227 17:30:06.073007 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/756e0e58-e2ac-4348-8ce6-db4fad770f68-kube-api-access-qzfwh" (OuterVolumeSpecName: "kube-api-access-qzfwh") pod "756e0e58-e2ac-4348-8ce6-db4fad770f68" (UID: "756e0e58-e2ac-4348-8ce6-db4fad770f68"). 
InnerVolumeSpecName "kube-api-access-qzfwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:30:06 crc kubenswrapper[4708]: I0227 17:30:06.160398 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qzfwh\" (UniqueName: \"kubernetes.io/projected/756e0e58-e2ac-4348-8ce6-db4fad770f68-kube-api-access-qzfwh\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:06 crc kubenswrapper[4708]: I0227 17:30:06.267072 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1284b6e4-1c2c-443e-b18d-163396ede328" path="/var/lib/kubelet/pods/1284b6e4-1c2c-443e-b18d-163396ede328/volumes" Feb 27 17:30:06 crc kubenswrapper[4708]: I0227 17:30:06.371393 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536890-vd89r" event={"ID":"756e0e58-e2ac-4348-8ce6-db4fad770f68","Type":"ContainerDied","Data":"4f6061458e6721d7a51104cdde44de7636aa8d407bd48e30817a0b285222136d"} Feb 27 17:30:06 crc kubenswrapper[4708]: I0227 17:30:06.371432 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f6061458e6721d7a51104cdde44de7636aa8d407bd48e30817a0b285222136d" Feb 27 17:30:06 crc kubenswrapper[4708]: I0227 17:30:06.371491 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536890-vd89r" Feb 27 17:30:06 crc kubenswrapper[4708]: I0227 17:30:06.965559 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536884-tr5f9"] Feb 27 17:30:06 crc kubenswrapper[4708]: I0227 17:30:06.977094 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536884-tr5f9"] Feb 27 17:30:08 crc kubenswrapper[4708]: I0227 17:30:08.241569 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb7e8057-bcf9-47a0-adfb-85f3ff61ac21" path="/var/lib/kubelet/pods/fb7e8057-bcf9-47a0-adfb-85f3ff61ac21/volumes" Feb 27 17:30:08 crc kubenswrapper[4708]: I0227 17:30:08.394280 4708 generic.go:334] "Generic (PLEG): container finished" podID="4e4714e4-4cd7-49c1-ab11-b708629976b1" containerID="cb56ab418b27bc3ab52e8eee07c6281e546e7ec0798f07fb5574f35f471552c5" exitCode=0 Feb 27 17:30:08 crc kubenswrapper[4708]: I0227 17:30:08.394378 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjtc6" event={"ID":"4e4714e4-4cd7-49c1-ab11-b708629976b1","Type":"ContainerDied","Data":"cb56ab418b27bc3ab52e8eee07c6281e546e7ec0798f07fb5574f35f471552c5"} Feb 27 17:30:09 crc kubenswrapper[4708]: I0227 17:30:09.408610 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjtc6" event={"ID":"4e4714e4-4cd7-49c1-ab11-b708629976b1","Type":"ContainerStarted","Data":"1b91df948e7b5a64747fb1e6dd6163bd1cb6768fa88e5ec1787087220ab5b7bb"} Feb 27 17:30:09 crc kubenswrapper[4708]: I0227 17:30:09.429796 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rjtc6" podStartSLOduration=2.867961192 podStartE2EDuration="14.429778717s" podCreationTimestamp="2026-02-27 17:29:55 +0000 UTC" firstStartedPulling="2026-02-27 17:29:57.259731599 +0000 UTC m=+2195.775529186" lastFinishedPulling="2026-02-27 17:30:08.821549104 +0000 UTC m=+2207.337346711" observedRunningTime="2026-02-27 17:30:09.427420242 +0000 UTC m=+2207.943217849" watchObservedRunningTime="2026-02-27 17:30:09.429778717 +0000 UTC m=+2207.945576304" Feb 27 17:30:11 crc kubenswrapper[4708]: I0227 
17:30:11.432672 4708 generic.go:334] "Generic (PLEG): container finished" podID="2b2d8b39-89e2-4743-910d-c5471b6a327c" containerID="59686a6efd71004ca2ac5bd9349e18e9d7c0ff3226ee579fc0f61be9ca2661d3" exitCode=0 Feb 27 17:30:11 crc kubenswrapper[4708]: I0227 17:30:11.432758 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm" event={"ID":"2b2d8b39-89e2-4743-910d-c5471b6a327c","Type":"ContainerDied","Data":"59686a6efd71004ca2ac5bd9349e18e9d7c0ff3226ee579fc0f61be9ca2661d3"} Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.107690 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.217075 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7qs2\" (UniqueName: \"kubernetes.io/projected/2b2d8b39-89e2-4743-910d-c5471b6a327c-kube-api-access-m7qs2\") pod \"2b2d8b39-89e2-4743-910d-c5471b6a327c\" (UID: \"2b2d8b39-89e2-4743-910d-c5471b6a327c\") " Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.217468 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2d8b39-89e2-4743-910d-c5471b6a327c-inventory\") pod \"2b2d8b39-89e2-4743-910d-c5471b6a327c\" (UID: \"2b2d8b39-89e2-4743-910d-c5471b6a327c\") " Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.217710 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2d8b39-89e2-4743-910d-c5471b6a327c-ssh-key-openstack-edpm-ipam\") pod \"2b2d8b39-89e2-4743-910d-c5471b6a327c\" (UID: \"2b2d8b39-89e2-4743-910d-c5471b6a327c\") " Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.233027 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b2d8b39-89e2-4743-910d-c5471b6a327c-kube-api-access-m7qs2" (OuterVolumeSpecName: "kube-api-access-m7qs2") pod "2b2d8b39-89e2-4743-910d-c5471b6a327c" (UID: "2b2d8b39-89e2-4743-910d-c5471b6a327c"). InnerVolumeSpecName "kube-api-access-m7qs2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.251176 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b2d8b39-89e2-4743-910d-c5471b6a327c-inventory" (OuterVolumeSpecName: "inventory") pod "2b2d8b39-89e2-4743-910d-c5471b6a327c" (UID: "2b2d8b39-89e2-4743-910d-c5471b6a327c"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.265813 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b2d8b39-89e2-4743-910d-c5471b6a327c-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2b2d8b39-89e2-4743-910d-c5471b6a327c" (UID: "2b2d8b39-89e2-4743-910d-c5471b6a327c"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.320793 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7qs2\" (UniqueName: \"kubernetes.io/projected/2b2d8b39-89e2-4743-910d-c5471b6a327c-kube-api-access-m7qs2\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.320827 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b2d8b39-89e2-4743-910d-c5471b6a327c-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.320838 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2b2d8b39-89e2-4743-910d-c5471b6a327c-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.453114 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm" event={"ID":"2b2d8b39-89e2-4743-910d-c5471b6a327c","Type":"ContainerDied","Data":"86ce4a93da238151b6b2bb7747ee721feb4d8ad35e2d7c7d3fdb163c27540bfb"} Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.453371 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86ce4a93da238151b6b2bb7747ee721feb4d8ad35e2d7c7d3fdb163c27540bfb" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.453199 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.556264 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22"] Feb 27 17:30:13 crc kubenswrapper[4708]: E0227 17:30:13.556618 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="085aa630-d7eb-49b7-8f73-7291681011e7" containerName="collect-profiles" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.556633 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="085aa630-d7eb-49b7-8f73-7291681011e7" containerName="collect-profiles" Feb 27 17:30:13 crc kubenswrapper[4708]: E0227 17:30:13.556662 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="756e0e58-e2ac-4348-8ce6-db4fad770f68" containerName="oc" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.556668 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="756e0e58-e2ac-4348-8ce6-db4fad770f68" containerName="oc" Feb 27 17:30:13 crc kubenswrapper[4708]: E0227 17:30:13.556690 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b2d8b39-89e2-4743-910d-c5471b6a327c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.556698 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b2d8b39-89e2-4743-910d-c5471b6a327c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.556879 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b2d8b39-89e2-4743-910d-c5471b6a327c" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.556912 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="756e0e58-e2ac-4348-8ce6-db4fad770f68" containerName="oc" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.556926 4708 
memory_manager.go:354] "RemoveStaleState removing state" podUID="085aa630-d7eb-49b7-8f73-7291681011e7" containerName="collect-profiles" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.557557 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.560229 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.560395 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.564226 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.564375 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.565348 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-telemetry-default-certs-0" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.565771 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.567655 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.568669 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.573102 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22"] Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.627136 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.627189 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.627888 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.627951 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.627992 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.628029 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.628075 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.628192 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.628230 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.628269 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc 
kubenswrapper[4708]: I0227 17:30:13.628339 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.628377 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.628422 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.628508 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntbhr\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-kube-api-access-ntbhr\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.730444 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.730483 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.730514 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.730562 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: 
\"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.730735 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.730763 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.730800 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntbhr\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-kube-api-access-ntbhr\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.730834 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.730903 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.730988 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.731009 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.731025 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.731048 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.731067 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.735147 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.735695 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.735790 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.736322 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.736495 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.737494 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.738077 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.738746 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.739009 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.739468 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-ssh-key-openstack-edpm-ipam\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.740211 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.748052 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " 
pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.748717 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-telemetry-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.751826 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntbhr\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-kube-api-access-ntbhr\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-wxf22\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:13 crc kubenswrapper[4708]: I0227 17:30:13.880037 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:14 crc kubenswrapper[4708]: I0227 17:30:14.461621 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22"] Feb 27 17:30:14 crc kubenswrapper[4708]: W0227 17:30:14.465620 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6eb203f_b8bd_4a02_8c47_ed0d1490b341.slice/crio-500b7b2a7b84da581ab40fef1d3c04ce17bd8dccf8329cc217eeca676456f69b WatchSource:0}: Error finding container 500b7b2a7b84da581ab40fef1d3c04ce17bd8dccf8329cc217eeca676456f69b: Status 404 returned error can't find the container with id 500b7b2a7b84da581ab40fef1d3c04ce17bd8dccf8329cc217eeca676456f69b Feb 27 17:30:15 crc kubenswrapper[4708]: I0227 17:30:15.477209 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" event={"ID":"e6eb203f-b8bd-4a02-8c47-ed0d1490b341","Type":"ContainerStarted","Data":"c3e52423e0b9d9292511127d2821c363b203281c7eb7f7559eb2dad632294751"} Feb 27 17:30:15 crc kubenswrapper[4708]: I0227 17:30:15.477554 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" event={"ID":"e6eb203f-b8bd-4a02-8c47-ed0d1490b341","Type":"ContainerStarted","Data":"500b7b2a7b84da581ab40fef1d3c04ce17bd8dccf8329cc217eeca676456f69b"} Feb 27 17:30:15 crc kubenswrapper[4708]: I0227 17:30:15.500878 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" podStartSLOduration=1.9951709869999998 podStartE2EDuration="2.500863007s" podCreationTimestamp="2026-02-27 17:30:13 +0000 UTC" firstStartedPulling="2026-02-27 17:30:14.468719437 +0000 UTC m=+2212.984517024" lastFinishedPulling="2026-02-27 17:30:14.974411457 +0000 UTC m=+2213.490209044" observedRunningTime="2026-02-27 17:30:15.498994515 +0000 UTC m=+2214.014792102" watchObservedRunningTime="2026-02-27 17:30:15.500863007 +0000 UTC m=+2214.016660594" Feb 27 17:30:16 crc kubenswrapper[4708]: I0227 17:30:16.039100 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rjtc6" Feb 27 17:30:16 crc kubenswrapper[4708]: I0227 17:30:16.039456 4708 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-rjtc6" Feb 27 17:30:17 crc kubenswrapper[4708]: I0227 17:30:17.117791 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-rjtc6" podUID="4e4714e4-4cd7-49c1-ab11-b708629976b1" containerName="registry-server" probeResult="failure" output=< Feb 27 17:30:17 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 17:30:17 crc kubenswrapper[4708]: > Feb 27 17:30:19 crc kubenswrapper[4708]: I0227 17:30:19.065402 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-db-sync-nrwjt"] Feb 27 17:30:19 crc kubenswrapper[4708]: I0227 17:30:19.079735 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-db-sync-nrwjt"] Feb 27 17:30:20 crc kubenswrapper[4708]: I0227 17:30:20.246825 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c553d876-99a3-4aed-b8ce-5b7ea04f17d5" path="/var/lib/kubelet/pods/c553d876-99a3-4aed-b8ce-5b7ea04f17d5/volumes" Feb 27 17:30:25 crc kubenswrapper[4708]: I0227 17:30:25.027983 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cloudkitty-storageinit-4sj27"] Feb 27 17:30:25 crc kubenswrapper[4708]: I0227 17:30:25.035554 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cloudkitty-storageinit-4sj27"] Feb 27 17:30:26 crc kubenswrapper[4708]: I0227 17:30:26.093519 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rjtc6" Feb 27 17:30:26 crc kubenswrapper[4708]: I0227 17:30:26.150396 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rjtc6" Feb 27 17:30:26 crc kubenswrapper[4708]: I0227 17:30:26.250348 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a789b4af-a0dc-41c9-907f-92f896befb9a" path="/var/lib/kubelet/pods/a789b4af-a0dc-41c9-907f-92f896befb9a/volumes" Feb 27 17:30:26 crc kubenswrapper[4708]: I0227 17:30:26.883163 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjtc6"] Feb 27 17:30:27 crc kubenswrapper[4708]: I0227 17:30:27.612937 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-rjtc6" podUID="4e4714e4-4cd7-49c1-ab11-b708629976b1" containerName="registry-server" containerID="cri-o://1b91df948e7b5a64747fb1e6dd6163bd1cb6768fa88e5ec1787087220ab5b7bb" gracePeriod=2 Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.250458 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rjtc6" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.378367 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4714e4-4cd7-49c1-ab11-b708629976b1-catalog-content\") pod \"4e4714e4-4cd7-49c1-ab11-b708629976b1\" (UID: \"4e4714e4-4cd7-49c1-ab11-b708629976b1\") " Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.378878 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4714e4-4cd7-49c1-ab11-b708629976b1-utilities\") pod \"4e4714e4-4cd7-49c1-ab11-b708629976b1\" (UID: \"4e4714e4-4cd7-49c1-ab11-b708629976b1\") " Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.379053 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckxcs\" (UniqueName: \"kubernetes.io/projected/4e4714e4-4cd7-49c1-ab11-b708629976b1-kube-api-access-ckxcs\") pod \"4e4714e4-4cd7-49c1-ab11-b708629976b1\" (UID: \"4e4714e4-4cd7-49c1-ab11-b708629976b1\") " Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.379586 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e4714e4-4cd7-49c1-ab11-b708629976b1-utilities" (OuterVolumeSpecName: "utilities") pod "4e4714e4-4cd7-49c1-ab11-b708629976b1" (UID: "4e4714e4-4cd7-49c1-ab11-b708629976b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.381078 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4e4714e4-4cd7-49c1-ab11-b708629976b1-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.384482 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e4714e4-4cd7-49c1-ab11-b708629976b1-kube-api-access-ckxcs" (OuterVolumeSpecName: "kube-api-access-ckxcs") pod "4e4714e4-4cd7-49c1-ab11-b708629976b1" (UID: "4e4714e4-4cd7-49c1-ab11-b708629976b1"). InnerVolumeSpecName "kube-api-access-ckxcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.484195 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckxcs\" (UniqueName: \"kubernetes.io/projected/4e4714e4-4cd7-49c1-ab11-b708629976b1-kube-api-access-ckxcs\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.510254 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e4714e4-4cd7-49c1-ab11-b708629976b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4e4714e4-4cd7-49c1-ab11-b708629976b1" (UID: "4e4714e4-4cd7-49c1-ab11-b708629976b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.587203 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4e4714e4-4cd7-49c1-ab11-b708629976b1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.626606 4708 generic.go:334] "Generic (PLEG): container finished" podID="4e4714e4-4cd7-49c1-ab11-b708629976b1" containerID="1b91df948e7b5a64747fb1e6dd6163bd1cb6768fa88e5ec1787087220ab5b7bb" exitCode=0 Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.626644 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjtc6" event={"ID":"4e4714e4-4cd7-49c1-ab11-b708629976b1","Type":"ContainerDied","Data":"1b91df948e7b5a64747fb1e6dd6163bd1cb6768fa88e5ec1787087220ab5b7bb"} Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.626653 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rjtc6" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.626670 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rjtc6" event={"ID":"4e4714e4-4cd7-49c1-ab11-b708629976b1","Type":"ContainerDied","Data":"c7502218e7a4b3765cfc43726ec116fed0b5c7459d6f86b466e37805b76f4e45"} Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.626687 4708 scope.go:117] "RemoveContainer" containerID="1b91df948e7b5a64747fb1e6dd6163bd1cb6768fa88e5ec1787087220ab5b7bb" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.673892 4708 scope.go:117] "RemoveContainer" containerID="cb56ab418b27bc3ab52e8eee07c6281e546e7ec0798f07fb5574f35f471552c5" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.677559 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-rjtc6"] Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.690190 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-rjtc6"] Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.705582 4708 scope.go:117] "RemoveContainer" containerID="11f31460cf10fd032579997cf230ae064b1f6f89f80fc809b31543eb0688dde6" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.759914 4708 scope.go:117] "RemoveContainer" containerID="1b91df948e7b5a64747fb1e6dd6163bd1cb6768fa88e5ec1787087220ab5b7bb" Feb 27 17:30:28 crc kubenswrapper[4708]: E0227 17:30:28.760395 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b91df948e7b5a64747fb1e6dd6163bd1cb6768fa88e5ec1787087220ab5b7bb\": container with ID starting with 1b91df948e7b5a64747fb1e6dd6163bd1cb6768fa88e5ec1787087220ab5b7bb not found: ID does not exist" containerID="1b91df948e7b5a64747fb1e6dd6163bd1cb6768fa88e5ec1787087220ab5b7bb" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.760449 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b91df948e7b5a64747fb1e6dd6163bd1cb6768fa88e5ec1787087220ab5b7bb"} err="failed to get container status \"1b91df948e7b5a64747fb1e6dd6163bd1cb6768fa88e5ec1787087220ab5b7bb\": rpc error: code = NotFound desc = could not find container \"1b91df948e7b5a64747fb1e6dd6163bd1cb6768fa88e5ec1787087220ab5b7bb\": container with ID starting with 1b91df948e7b5a64747fb1e6dd6163bd1cb6768fa88e5ec1787087220ab5b7bb not found: ID does not exist" Feb 27 17:30:28 crc 
kubenswrapper[4708]: I0227 17:30:28.760482 4708 scope.go:117] "RemoveContainer" containerID="cb56ab418b27bc3ab52e8eee07c6281e546e7ec0798f07fb5574f35f471552c5" Feb 27 17:30:28 crc kubenswrapper[4708]: E0227 17:30:28.760782 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb56ab418b27bc3ab52e8eee07c6281e546e7ec0798f07fb5574f35f471552c5\": container with ID starting with cb56ab418b27bc3ab52e8eee07c6281e546e7ec0798f07fb5574f35f471552c5 not found: ID does not exist" containerID="cb56ab418b27bc3ab52e8eee07c6281e546e7ec0798f07fb5574f35f471552c5" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.760819 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb56ab418b27bc3ab52e8eee07c6281e546e7ec0798f07fb5574f35f471552c5"} err="failed to get container status \"cb56ab418b27bc3ab52e8eee07c6281e546e7ec0798f07fb5574f35f471552c5\": rpc error: code = NotFound desc = could not find container \"cb56ab418b27bc3ab52e8eee07c6281e546e7ec0798f07fb5574f35f471552c5\": container with ID starting with cb56ab418b27bc3ab52e8eee07c6281e546e7ec0798f07fb5574f35f471552c5 not found: ID does not exist" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.760865 4708 scope.go:117] "RemoveContainer" containerID="11f31460cf10fd032579997cf230ae064b1f6f89f80fc809b31543eb0688dde6" Feb 27 17:30:28 crc kubenswrapper[4708]: E0227 17:30:28.761126 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"11f31460cf10fd032579997cf230ae064b1f6f89f80fc809b31543eb0688dde6\": container with ID starting with 11f31460cf10fd032579997cf230ae064b1f6f89f80fc809b31543eb0688dde6 not found: ID does not exist" containerID="11f31460cf10fd032579997cf230ae064b1f6f89f80fc809b31543eb0688dde6" Feb 27 17:30:28 crc kubenswrapper[4708]: I0227 17:30:28.761161 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"11f31460cf10fd032579997cf230ae064b1f6f89f80fc809b31543eb0688dde6"} err="failed to get container status \"11f31460cf10fd032579997cf230ae064b1f6f89f80fc809b31543eb0688dde6\": rpc error: code = NotFound desc = could not find container \"11f31460cf10fd032579997cf230ae064b1f6f89f80fc809b31543eb0688dde6\": container with ID starting with 11f31460cf10fd032579997cf230ae064b1f6f89f80fc809b31543eb0688dde6 not found: ID does not exist" Feb 27 17:30:30 crc kubenswrapper[4708]: I0227 17:30:30.245543 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e4714e4-4cd7-49c1-ab11-b708629976b1" path="/var/lib/kubelet/pods/4e4714e4-4cd7-49c1-ab11-b708629976b1/volumes" Feb 27 17:30:35 crc kubenswrapper[4708]: I0227 17:30:35.631141 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:30:35 crc kubenswrapper[4708]: I0227 17:30:35.631707 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:30:56 crc kubenswrapper[4708]: I0227 17:30:56.931684 4708 generic.go:334] "Generic (PLEG): 
container finished" podID="e6eb203f-b8bd-4a02-8c47-ed0d1490b341" containerID="c3e52423e0b9d9292511127d2821c363b203281c7eb7f7559eb2dad632294751" exitCode=0 Feb 27 17:30:56 crc kubenswrapper[4708]: I0227 17:30:56.931827 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" event={"ID":"e6eb203f-b8bd-4a02-8c47-ed0d1490b341","Type":"ContainerDied","Data":"c3e52423e0b9d9292511127d2821c363b203281c7eb7f7559eb2dad632294751"} Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.452822 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.642027 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-neutron-metadata-combined-ca-bundle\") pod \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.642463 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-repo-setup-combined-ca-bundle\") pod \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.642525 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-libvirt-combined-ca-bundle\") pod \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.642568 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.642618 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntbhr\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-kube-api-access-ntbhr\") pod \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.642674 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-telemetry-combined-ca-bundle\") pod \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.642725 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-nova-combined-ca-bundle\") pod \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.642764 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-telemetry-default-certs-0\") pod \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.642840 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-ovn-default-certs-0\") pod \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.642902 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-inventory\") pod \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.642948 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-ovn-combined-ca-bundle\") pod \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.642986 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-bootstrap-combined-ca-bundle\") pod \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.643036 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-ssh-key-openstack-edpm-ipam\") pod \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.643078 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\" (UID: \"e6eb203f-b8bd-4a02-8c47-ed0d1490b341\") " Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.649170 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "e6eb203f-b8bd-4a02-8c47-ed0d1490b341" (UID: "e6eb203f-b8bd-4a02-8c47-ed0d1490b341"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.649708 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "e6eb203f-b8bd-4a02-8c47-ed0d1490b341" (UID: "e6eb203f-b8bd-4a02-8c47-ed0d1490b341"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.650159 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "e6eb203f-b8bd-4a02-8c47-ed0d1490b341" (UID: "e6eb203f-b8bd-4a02-8c47-ed0d1490b341"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.650282 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "e6eb203f-b8bd-4a02-8c47-ed0d1490b341" (UID: "e6eb203f-b8bd-4a02-8c47-ed0d1490b341"). InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.651485 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "e6eb203f-b8bd-4a02-8c47-ed0d1490b341" (UID: "e6eb203f-b8bd-4a02-8c47-ed0d1490b341"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.652328 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "e6eb203f-b8bd-4a02-8c47-ed0d1490b341" (UID: "e6eb203f-b8bd-4a02-8c47-ed0d1490b341"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.652830 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "e6eb203f-b8bd-4a02-8c47-ed0d1490b341" (UID: "e6eb203f-b8bd-4a02-8c47-ed0d1490b341"). InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.653321 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e6eb203f-b8bd-4a02-8c47-ed0d1490b341" (UID: "e6eb203f-b8bd-4a02-8c47-ed0d1490b341"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.654601 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "e6eb203f-b8bd-4a02-8c47-ed0d1490b341" (UID: "e6eb203f-b8bd-4a02-8c47-ed0d1490b341"). InnerVolumeSpecName "nova-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.657465 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-kube-api-access-ntbhr" (OuterVolumeSpecName: "kube-api-access-ntbhr") pod "e6eb203f-b8bd-4a02-8c47-ed0d1490b341" (UID: "e6eb203f-b8bd-4a02-8c47-ed0d1490b341"). InnerVolumeSpecName "kube-api-access-ntbhr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.658176 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-telemetry-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-telemetry-default-certs-0") pod "e6eb203f-b8bd-4a02-8c47-ed0d1490b341" (UID: "e6eb203f-b8bd-4a02-8c47-ed0d1490b341"). InnerVolumeSpecName "openstack-edpm-ipam-telemetry-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.662806 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "e6eb203f-b8bd-4a02-8c47-ed0d1490b341" (UID: "e6eb203f-b8bd-4a02-8c47-ed0d1490b341"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.679709 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-inventory" (OuterVolumeSpecName: "inventory") pod "e6eb203f-b8bd-4a02-8c47-ed0d1490b341" (UID: "e6eb203f-b8bd-4a02-8c47-ed0d1490b341"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.681342 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e6eb203f-b8bd-4a02-8c47-ed0d1490b341" (UID: "e6eb203f-b8bd-4a02-8c47-ed0d1490b341"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.746335 4708 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.746372 4708 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.746384 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.746397 4708 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.746413 4708 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.746425 4708 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.746436 4708 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.746448 4708 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.746460 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntbhr\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-kube-api-access-ntbhr\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.746472 4708 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.746484 4708 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.746495 4708 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-telemetry-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-telemetry-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.746537 4708 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.746551 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e6eb203f-b8bd-4a02-8c47-ed0d1490b341-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.962254 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" event={"ID":"e6eb203f-b8bd-4a02-8c47-ed0d1490b341","Type":"ContainerDied","Data":"500b7b2a7b84da581ab40fef1d3c04ce17bd8dccf8329cc217eeca676456f69b"} Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.962319 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="500b7b2a7b84da581ab40fef1d3c04ce17bd8dccf8329cc217eeca676456f69b" Feb 27 17:30:58 crc kubenswrapper[4708]: I0227 17:30:58.962371 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-wxf22" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.108697 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2"] Feb 27 17:30:59 crc kubenswrapper[4708]: E0227 17:30:59.109148 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4714e4-4cd7-49c1-ab11-b708629976b1" containerName="extract-content" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.109173 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4714e4-4cd7-49c1-ab11-b708629976b1" containerName="extract-content" Feb 27 17:30:59 crc kubenswrapper[4708]: E0227 17:30:59.109190 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4714e4-4cd7-49c1-ab11-b708629976b1" containerName="extract-utilities" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.109198 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4714e4-4cd7-49c1-ab11-b708629976b1" containerName="extract-utilities" Feb 27 17:30:59 crc kubenswrapper[4708]: E0227 17:30:59.109209 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e6eb203f-b8bd-4a02-8c47-ed0d1490b341" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.109217 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e6eb203f-b8bd-4a02-8c47-ed0d1490b341" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 27 17:30:59 crc kubenswrapper[4708]: E0227 17:30:59.109232 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4714e4-4cd7-49c1-ab11-b708629976b1" containerName="registry-server" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.109237 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4714e4-4cd7-49c1-ab11-b708629976b1" containerName="registry-server" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.109407 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="e6eb203f-b8bd-4a02-8c47-ed0d1490b341" 
containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.109423 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4714e4-4cd7-49c1-ab11-b708629976b1" containerName="registry-server" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.110185 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.117378 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.117429 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.117737 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.117943 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.117982 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.133251 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2"] Feb 27 17:30:59 crc kubenswrapper[4708]: E0227 17:30:59.172682 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode6eb203f_b8bd_4a02_8c47_ed0d1490b341.slice\": RecentStats: unable to find data in memory cache]" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.285952 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.286314 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p6l5t\" (UniqueName: \"kubernetes.io/projected/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-kube-api-access-p6l5t\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.286342 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.286457 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ovncontroller-config-0\") pod 
\"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.286749 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.388964 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.389096 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p6l5t\" (UniqueName: \"kubernetes.io/projected/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-kube-api-access-p6l5t\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.389117 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.389138 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.389210 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.390709 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.393713 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.394768 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ssh-key-openstack-edpm-ipam\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.394939 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.419888 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p6l5t\" (UniqueName: \"kubernetes.io/projected/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-kube-api-access-p6l5t\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-jzmv2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:30:59 crc kubenswrapper[4708]: I0227 17:30:59.435400 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:31:00 crc kubenswrapper[4708]: I0227 17:31:00.054464 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2"] Feb 27 17:31:00 crc kubenswrapper[4708]: I0227 17:31:00.988829 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" event={"ID":"e8a95a5c-facb-48fb-85e3-6f440a9e84b2","Type":"ContainerStarted","Data":"97629866268cfc2b916967b2b91f1ddf71ca752af7772baa0c03b9138c47302e"} Feb 27 17:31:00 crc kubenswrapper[4708]: I0227 17:31:00.989410 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" event={"ID":"e8a95a5c-facb-48fb-85e3-6f440a9e84b2","Type":"ContainerStarted","Data":"d32d7ea1b666a76ae93fca7a674d84277c1d425753c0ca58ac0d32d3960fce27"} Feb 27 17:31:01 crc kubenswrapper[4708]: I0227 17:31:01.021581 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" podStartSLOduration=1.531585507 podStartE2EDuration="2.021557818s" podCreationTimestamp="2026-02-27 17:30:59 +0000 UTC" firstStartedPulling="2026-02-27 17:31:00.054116166 +0000 UTC m=+2258.569913763" lastFinishedPulling="2026-02-27 17:31:00.544088447 +0000 UTC m=+2259.059886074" observedRunningTime="2026-02-27 17:31:01.01302115 +0000 UTC m=+2259.528818777" watchObservedRunningTime="2026-02-27 17:31:01.021557818 +0000 UTC m=+2259.537355445" Feb 27 17:31:01 crc kubenswrapper[4708]: I0227 17:31:01.087656 4708 scope.go:117] "RemoveContainer" containerID="e6a3eb2e2350a21c58cc2a889616119b1b5a2a54bc93e1ad35425a674f98af6d" Feb 27 17:31:01 crc kubenswrapper[4708]: I0227 17:31:01.143173 4708 scope.go:117] "RemoveContainer" containerID="f3d79925d3c93f6bbe5d80d792d5683c9ab04a3a48f1cadd41d8d15df04950bc" Feb 27 17:31:01 crc kubenswrapper[4708]: I0227 17:31:01.209815 4708 scope.go:117] "RemoveContainer" 
containerID="628ff3379399863dc641f831171bf611437414c1a3bfa51473e1a3f7b4e5e468" Feb 27 17:31:01 crc kubenswrapper[4708]: I0227 17:31:01.266026 4708 scope.go:117] "RemoveContainer" containerID="a8f01e7af3e88c8f59248409dbd41d37754ddfccda0e0f2944ffb70cfed48674" Feb 27 17:31:05 crc kubenswrapper[4708]: I0227 17:31:05.631488 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:31:05 crc kubenswrapper[4708]: I0227 17:31:05.632201 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:31:35 crc kubenswrapper[4708]: I0227 17:31:35.632039 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:31:35 crc kubenswrapper[4708]: I0227 17:31:35.632671 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:31:35 crc kubenswrapper[4708]: I0227 17:31:35.632796 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 17:31:35 crc kubenswrapper[4708]: I0227 17:31:35.634176 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:31:35 crc kubenswrapper[4708]: I0227 17:31:35.634266 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" gracePeriod=600 Feb 27 17:31:35 crc kubenswrapper[4708]: E0227 17:31:35.765651 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:31:36 crc kubenswrapper[4708]: I0227 17:31:36.425516 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" exitCode=0 Feb 27 17:31:36 crc 
kubenswrapper[4708]: I0227 17:31:36.425577 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879"} Feb 27 17:31:36 crc kubenswrapper[4708]: I0227 17:31:36.425623 4708 scope.go:117] "RemoveContainer" containerID="fc64fcd853be9a08f141cf8d2540773fd0f62639171cb2f54c41087f21e9f447" Feb 27 17:31:36 crc kubenswrapper[4708]: I0227 17:31:36.426829 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:31:36 crc kubenswrapper[4708]: E0227 17:31:36.427507 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:31:48 crc kubenswrapper[4708]: I0227 17:31:48.228241 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:31:48 crc kubenswrapper[4708]: E0227 17:31:48.229083 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:32:00 crc kubenswrapper[4708]: I0227 17:32:00.171005 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536892-xgtq2"] Feb 27 17:32:00 crc kubenswrapper[4708]: I0227 17:32:00.173225 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536892-xgtq2" Feb 27 17:32:00 crc kubenswrapper[4708]: I0227 17:32:00.176484 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:32:00 crc kubenswrapper[4708]: I0227 17:32:00.176815 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:32:00 crc kubenswrapper[4708]: I0227 17:32:00.177412 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:32:00 crc kubenswrapper[4708]: I0227 17:32:00.181800 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536892-xgtq2"] Feb 27 17:32:00 crc kubenswrapper[4708]: I0227 17:32:00.356712 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czh9r\" (UniqueName: \"kubernetes.io/projected/59b35ef6-e427-4dda-9aae-fc748d00cc1f-kube-api-access-czh9r\") pod \"auto-csr-approver-29536892-xgtq2\" (UID: \"59b35ef6-e427-4dda-9aae-fc748d00cc1f\") " pod="openshift-infra/auto-csr-approver-29536892-xgtq2" Feb 27 17:32:00 crc kubenswrapper[4708]: I0227 17:32:00.459304 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-czh9r\" (UniqueName: \"kubernetes.io/projected/59b35ef6-e427-4dda-9aae-fc748d00cc1f-kube-api-access-czh9r\") pod \"auto-csr-approver-29536892-xgtq2\" (UID: \"59b35ef6-e427-4dda-9aae-fc748d00cc1f\") " pod="openshift-infra/auto-csr-approver-29536892-xgtq2" Feb 27 17:32:00 crc kubenswrapper[4708]: I0227 17:32:00.496840 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czh9r\" (UniqueName: \"kubernetes.io/projected/59b35ef6-e427-4dda-9aae-fc748d00cc1f-kube-api-access-czh9r\") pod \"auto-csr-approver-29536892-xgtq2\" (UID: \"59b35ef6-e427-4dda-9aae-fc748d00cc1f\") " pod="openshift-infra/auto-csr-approver-29536892-xgtq2" Feb 27 17:32:00 crc kubenswrapper[4708]: I0227 17:32:00.503879 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536892-xgtq2" Feb 27 17:32:01 crc kubenswrapper[4708]: I0227 17:32:01.030340 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536892-xgtq2"] Feb 27 17:32:01 crc kubenswrapper[4708]: W0227 17:32:01.032586 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod59b35ef6_e427_4dda_9aae_fc748d00cc1f.slice/crio-8446c2d7a8cf2984b6b7319746154105cc673599c023ecd4bd893495ac312705 WatchSource:0}: Error finding container 8446c2d7a8cf2984b6b7319746154105cc673599c023ecd4bd893495ac312705: Status 404 returned error can't find the container with id 8446c2d7a8cf2984b6b7319746154105cc673599c023ecd4bd893495ac312705 Feb 27 17:32:01 crc kubenswrapper[4708]: I0227 17:32:01.228404 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:32:01 crc kubenswrapper[4708]: E0227 17:32:01.228895 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:32:01 crc kubenswrapper[4708]: I0227 17:32:01.727262 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536892-xgtq2" event={"ID":"59b35ef6-e427-4dda-9aae-fc748d00cc1f","Type":"ContainerStarted","Data":"8446c2d7a8cf2984b6b7319746154105cc673599c023ecd4bd893495ac312705"} Feb 27 17:32:02 crc kubenswrapper[4708]: I0227 17:32:02.741374 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536892-xgtq2" event={"ID":"59b35ef6-e427-4dda-9aae-fc748d00cc1f","Type":"ContainerStarted","Data":"399e1c6f8919ffc12de65889c17c93e30a6fc9ac180e7fba7eac9215c3c53834"} Feb 27 17:32:02 crc kubenswrapper[4708]: I0227 17:32:02.760485 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536892-xgtq2" podStartSLOduration=1.516406698 podStartE2EDuration="2.760465295s" podCreationTimestamp="2026-02-27 17:32:00 +0000 UTC" firstStartedPulling="2026-02-27 17:32:01.035471309 +0000 UTC m=+2319.551268916" lastFinishedPulling="2026-02-27 17:32:02.279529896 +0000 UTC m=+2320.795327513" observedRunningTime="2026-02-27 17:32:02.759672623 +0000 UTC m=+2321.275470210" watchObservedRunningTime="2026-02-27 17:32:02.760465295 +0000 UTC m=+2321.276262882" Feb 27 17:32:03 crc kubenswrapper[4708]: I0227 17:32:03.767744 4708 generic.go:334] "Generic (PLEG): container finished" podID="59b35ef6-e427-4dda-9aae-fc748d00cc1f" containerID="399e1c6f8919ffc12de65889c17c93e30a6fc9ac180e7fba7eac9215c3c53834" exitCode=0 Feb 27 17:32:03 crc kubenswrapper[4708]: I0227 17:32:03.768055 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536892-xgtq2" event={"ID":"59b35ef6-e427-4dda-9aae-fc748d00cc1f","Type":"ContainerDied","Data":"399e1c6f8919ffc12de65889c17c93e30a6fc9ac180e7fba7eac9215c3c53834"} Feb 27 17:32:05 crc kubenswrapper[4708]: I0227 17:32:05.111897 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536892-xgtq2" Feb 27 17:32:05 crc kubenswrapper[4708]: I0227 17:32:05.279443 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czh9r\" (UniqueName: \"kubernetes.io/projected/59b35ef6-e427-4dda-9aae-fc748d00cc1f-kube-api-access-czh9r\") pod \"59b35ef6-e427-4dda-9aae-fc748d00cc1f\" (UID: \"59b35ef6-e427-4dda-9aae-fc748d00cc1f\") " Feb 27 17:32:05 crc kubenswrapper[4708]: I0227 17:32:05.289590 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59b35ef6-e427-4dda-9aae-fc748d00cc1f-kube-api-access-czh9r" (OuterVolumeSpecName: "kube-api-access-czh9r") pod "59b35ef6-e427-4dda-9aae-fc748d00cc1f" (UID: "59b35ef6-e427-4dda-9aae-fc748d00cc1f"). InnerVolumeSpecName "kube-api-access-czh9r". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:32:05 crc kubenswrapper[4708]: I0227 17:32:05.343075 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536886-xb2zq"] Feb 27 17:32:05 crc kubenswrapper[4708]: I0227 17:32:05.357660 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536886-xb2zq"] Feb 27 17:32:05 crc kubenswrapper[4708]: I0227 17:32:05.382833 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czh9r\" (UniqueName: \"kubernetes.io/projected/59b35ef6-e427-4dda-9aae-fc748d00cc1f-kube-api-access-czh9r\") on node \"crc\" DevicePath \"\"" Feb 27 17:32:05 crc kubenswrapper[4708]: I0227 17:32:05.791034 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536892-xgtq2" event={"ID":"59b35ef6-e427-4dda-9aae-fc748d00cc1f","Type":"ContainerDied","Data":"8446c2d7a8cf2984b6b7319746154105cc673599c023ecd4bd893495ac312705"} Feb 27 17:32:05 crc kubenswrapper[4708]: I0227 17:32:05.791397 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8446c2d7a8cf2984b6b7319746154105cc673599c023ecd4bd893495ac312705" Feb 27 17:32:05 crc kubenswrapper[4708]: I0227 17:32:05.791179 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536892-xgtq2" Feb 27 17:32:06 crc kubenswrapper[4708]: I0227 17:32:06.249711 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88206c48-bc8c-4dc7-a05b-50814f0c7446" path="/var/lib/kubelet/pods/88206c48-bc8c-4dc7-a05b-50814f0c7446/volumes" Feb 27 17:32:08 crc kubenswrapper[4708]: I0227 17:32:08.838003 4708 generic.go:334] "Generic (PLEG): container finished" podID="e8a95a5c-facb-48fb-85e3-6f440a9e84b2" containerID="97629866268cfc2b916967b2b91f1ddf71ca752af7772baa0c03b9138c47302e" exitCode=0 Feb 27 17:32:08 crc kubenswrapper[4708]: I0227 17:32:08.838132 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" event={"ID":"e8a95a5c-facb-48fb-85e3-6f440a9e84b2","Type":"ContainerDied","Data":"97629866268cfc2b916967b2b91f1ddf71ca752af7772baa0c03b9138c47302e"} Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.461378 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.612494 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ovncontroller-config-0\") pod \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.612782 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ovn-combined-ca-bundle\") pod \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.612927 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p6l5t\" (UniqueName: \"kubernetes.io/projected/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-kube-api-access-p6l5t\") pod \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.613786 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-inventory\") pod \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.614007 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ssh-key-openstack-edpm-ipam\") pod \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\" (UID: \"e8a95a5c-facb-48fb-85e3-6f440a9e84b2\") " Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.625134 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "e8a95a5c-facb-48fb-85e3-6f440a9e84b2" (UID: "e8a95a5c-facb-48fb-85e3-6f440a9e84b2"). InnerVolumeSpecName "ovn-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.625163 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-kube-api-access-p6l5t" (OuterVolumeSpecName: "kube-api-access-p6l5t") pod "e8a95a5c-facb-48fb-85e3-6f440a9e84b2" (UID: "e8a95a5c-facb-48fb-85e3-6f440a9e84b2"). InnerVolumeSpecName "kube-api-access-p6l5t". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.641761 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "e8a95a5c-facb-48fb-85e3-6f440a9e84b2" (UID: "e8a95a5c-facb-48fb-85e3-6f440a9e84b2"). InnerVolumeSpecName "ovncontroller-config-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.651604 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "e8a95a5c-facb-48fb-85e3-6f440a9e84b2" (UID: "e8a95a5c-facb-48fb-85e3-6f440a9e84b2"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.670345 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-inventory" (OuterVolumeSpecName: "inventory") pod "e8a95a5c-facb-48fb-85e3-6f440a9e84b2" (UID: "e8a95a5c-facb-48fb-85e3-6f440a9e84b2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.716543 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.716872 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.716889 4708 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.716902 4708 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.716916 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p6l5t\" (UniqueName: \"kubernetes.io/projected/e8a95a5c-facb-48fb-85e3-6f440a9e84b2-kube-api-access-p6l5t\") on node \"crc\" DevicePath \"\"" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.864447 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" event={"ID":"e8a95a5c-facb-48fb-85e3-6f440a9e84b2","Type":"ContainerDied","Data":"d32d7ea1b666a76ae93fca7a674d84277c1d425753c0ca58ac0d32d3960fce27"} Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.864483 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d32d7ea1b666a76ae93fca7a674d84277c1d425753c0ca58ac0d32d3960fce27" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.864522 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-jzmv2" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.967945 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn"] Feb 27 17:32:10 crc kubenswrapper[4708]: E0227 17:32:10.968486 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8a95a5c-facb-48fb-85e3-6f440a9e84b2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.968510 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8a95a5c-facb-48fb-85e3-6f440a9e84b2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 27 17:32:10 crc kubenswrapper[4708]: E0227 17:32:10.968533 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59b35ef6-e427-4dda-9aae-fc748d00cc1f" containerName="oc" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.968542 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="59b35ef6-e427-4dda-9aae-fc748d00cc1f" containerName="oc" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.968891 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8a95a5c-facb-48fb-85e3-6f440a9e84b2" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.968908 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="59b35ef6-e427-4dda-9aae-fc748d00cc1f" containerName="oc" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.969889 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.973304 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.973385 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.973430 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.973586 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.973677 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.973788 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 17:32:10 crc kubenswrapper[4708]: I0227 17:32:10.980343 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn"] Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.122278 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vzcxv\" (UniqueName: \"kubernetes.io/projected/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-kube-api-access-vzcxv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.122534 4708 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.122630 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.122796 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.122987 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.123103 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.225071 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.225236 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.225373 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" 
(UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.225454 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.225635 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vzcxv\" (UniqueName: \"kubernetes.io/projected/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-kube-api-access-vzcxv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.225724 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.230151 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.230252 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.231745 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.231880 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-ssh-key-openstack-edpm-ipam\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 
17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.232793 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.243277 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vzcxv\" (UniqueName: \"kubernetes.io/projected/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-kube-api-access-vzcxv\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.301348 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:32:11 crc kubenswrapper[4708]: I0227 17:32:11.911843 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn"] Feb 27 17:32:12 crc kubenswrapper[4708]: I0227 17:32:12.883707 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" event={"ID":"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc","Type":"ContainerStarted","Data":"fdae2fc836b4f817bead33b7b53a5be1a090ef6346bec5f0bc41efe493976828"} Feb 27 17:32:12 crc kubenswrapper[4708]: I0227 17:32:12.884045 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" event={"ID":"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc","Type":"ContainerStarted","Data":"432d4569ba0e3201e5f1192c439048c00e9b9f2f41d8046aaafd74eefd42f778"} Feb 27 17:32:12 crc kubenswrapper[4708]: I0227 17:32:12.905468 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" podStartSLOduration=2.398577354 podStartE2EDuration="2.905447397s" podCreationTimestamp="2026-02-27 17:32:10 +0000 UTC" firstStartedPulling="2026-02-27 17:32:11.916868305 +0000 UTC m=+2330.432665922" lastFinishedPulling="2026-02-27 17:32:12.423738338 +0000 UTC m=+2330.939535965" observedRunningTime="2026-02-27 17:32:12.903259936 +0000 UTC m=+2331.419057553" watchObservedRunningTime="2026-02-27 17:32:12.905447397 +0000 UTC m=+2331.421245024" Feb 27 17:32:15 crc kubenswrapper[4708]: I0227 17:32:15.228409 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:32:15 crc kubenswrapper[4708]: E0227 17:32:15.229187 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:32:28 crc kubenswrapper[4708]: I0227 17:32:28.228351 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 
27 17:32:28 crc kubenswrapper[4708]: E0227 17:32:28.229184 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:32:42 crc kubenswrapper[4708]: I0227 17:32:42.235502 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:32:42 crc kubenswrapper[4708]: E0227 17:32:42.236330 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:32:55 crc kubenswrapper[4708]: I0227 17:32:55.229232 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:32:55 crc kubenswrapper[4708]: E0227 17:32:55.231586 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:33:01 crc kubenswrapper[4708]: I0227 17:33:01.440682 4708 generic.go:334] "Generic (PLEG): container finished" podID="7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc" containerID="fdae2fc836b4f817bead33b7b53a5be1a090ef6346bec5f0bc41efe493976828" exitCode=0 Feb 27 17:33:01 crc kubenswrapper[4708]: I0227 17:33:01.440794 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" event={"ID":"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc","Type":"ContainerDied","Data":"fdae2fc836b4f817bead33b7b53a5be1a090ef6346bec5f0bc41efe493976828"} Feb 27 17:33:01 crc kubenswrapper[4708]: I0227 17:33:01.445262 4708 scope.go:117] "RemoveContainer" containerID="5c2b724fdf7187a0815bc2a9c0e344beb78098589310dd5f9a7774da98633049" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.182840 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.327449 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-ssh-key-openstack-edpm-ipam\") pod \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.327606 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vzcxv\" (UniqueName: \"kubernetes.io/projected/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-kube-api-access-vzcxv\") pod \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.327675 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-neutron-metadata-combined-ca-bundle\") pod \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.327904 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-neutron-ovn-metadata-agent-neutron-config-0\") pod \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.327962 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-inventory\") pod \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.328033 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-nova-metadata-neutron-config-0\") pod \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\" (UID: \"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc\") " Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.333515 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc" (UID: "7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.344347 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-kube-api-access-vzcxv" (OuterVolumeSpecName: "kube-api-access-vzcxv") pod "7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc" (UID: "7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc"). InnerVolumeSpecName "kube-api-access-vzcxv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.355655 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc" (UID: "7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.359949 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc" (UID: "7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc"). InnerVolumeSpecName "nova-metadata-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.368218 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-inventory" (OuterVolumeSpecName: "inventory") pod "7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc" (UID: "7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.386375 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc" (UID: "7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.431322 4708 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.431363 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.431383 4708 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.431405 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.431424 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vzcxv\" (UniqueName: \"kubernetes.io/projected/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-kube-api-access-vzcxv\") on node \"crc\" DevicePath \"\"" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.431441 4708 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.460917 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" event={"ID":"7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc","Type":"ContainerDied","Data":"432d4569ba0e3201e5f1192c439048c00e9b9f2f41d8046aaafd74eefd42f778"} Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.460964 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="432d4569ba0e3201e5f1192c439048c00e9b9f2f41d8046aaafd74eefd42f778" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.460996 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.585672 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds"] Feb 27 17:33:03 crc kubenswrapper[4708]: E0227 17:33:03.590261 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.590284 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.590518 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.592070 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.599301 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.599345 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.599474 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.599582 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.599691 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.608478 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds"] Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.738012 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.738249 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.738396 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: 
\"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.738473 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-59bvb\" (UniqueName: \"kubernetes.io/projected/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-kube-api-access-59bvb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.738555 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.840147 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.840530 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-59bvb\" (UniqueName: \"kubernetes.io/projected/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-kube-api-access-59bvb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.840588 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.840797 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.840901 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.846726 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") 
" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.846987 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-ssh-key-openstack-edpm-ipam\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.847056 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.848784 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.859295 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-59bvb\" (UniqueName: \"kubernetes.io/projected/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-kube-api-access-59bvb\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-gqvds\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:03 crc kubenswrapper[4708]: I0227 17:33:03.913210 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:33:04 crc kubenswrapper[4708]: I0227 17:33:04.506627 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds"] Feb 27 17:33:05 crc kubenswrapper[4708]: I0227 17:33:05.486191 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" event={"ID":"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f","Type":"ContainerStarted","Data":"36ab7b4092be22aaf3dad797e301e5095892ee3267d440e2f9680b2a4bb406f3"} Feb 27 17:33:07 crc kubenswrapper[4708]: I0227 17:33:07.228755 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:33:07 crc kubenswrapper[4708]: E0227 17:33:07.229277 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:33:08 crc kubenswrapper[4708]: I0227 17:33:08.528661 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" event={"ID":"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f","Type":"ContainerStarted","Data":"f63990441727892bdc6fabf596d7576f3a92cf87efcf6fe6fee3534f936cad2f"} Feb 27 17:33:08 crc kubenswrapper[4708]: I0227 17:33:08.575687 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" podStartSLOduration=2.990644511 podStartE2EDuration="5.575665231s" podCreationTimestamp="2026-02-27 17:33:03 +0000 UTC" firstStartedPulling="2026-02-27 17:33:04.510416679 +0000 UTC m=+2383.026214276" lastFinishedPulling="2026-02-27 17:33:07.095437379 +0000 UTC m=+2385.611234996" observedRunningTime="2026-02-27 17:33:08.548825912 +0000 UTC m=+2387.064623539" watchObservedRunningTime="2026-02-27 17:33:08.575665231 +0000 UTC m=+2387.091462828" Feb 27 17:33:21 crc kubenswrapper[4708]: I0227 17:33:21.228283 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:33:21 crc kubenswrapper[4708]: E0227 17:33:21.229054 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:33:36 crc kubenswrapper[4708]: I0227 17:33:36.229644 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:33:36 crc kubenswrapper[4708]: E0227 17:33:36.230620 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:33:51 crc kubenswrapper[4708]: I0227 17:33:51.229465 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:33:51 crc kubenswrapper[4708]: E0227 17:33:51.230734 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:33:51 crc kubenswrapper[4708]: I0227 17:33:51.751674 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h4qkl"] Feb 27 17:33:51 crc kubenswrapper[4708]: I0227 17:33:51.753797 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:33:51 crc kubenswrapper[4708]: I0227 17:33:51.772501 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4qkl"] Feb 27 17:33:51 crc kubenswrapper[4708]: I0227 17:33:51.840594 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56569602-8d2c-486f-8b35-c3587a28b78d-catalog-content\") pod \"certified-operators-h4qkl\" (UID: \"56569602-8d2c-486f-8b35-c3587a28b78d\") " pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:33:51 crc kubenswrapper[4708]: I0227 17:33:51.840740 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56569602-8d2c-486f-8b35-c3587a28b78d-utilities\") pod \"certified-operators-h4qkl\" (UID: \"56569602-8d2c-486f-8b35-c3587a28b78d\") " pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:33:51 crc kubenswrapper[4708]: I0227 17:33:51.840765 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhwqw\" (UniqueName: \"kubernetes.io/projected/56569602-8d2c-486f-8b35-c3587a28b78d-kube-api-access-qhwqw\") pod \"certified-operators-h4qkl\" (UID: \"56569602-8d2c-486f-8b35-c3587a28b78d\") " pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:33:51 crc kubenswrapper[4708]: I0227 17:33:51.942570 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56569602-8d2c-486f-8b35-c3587a28b78d-utilities\") pod \"certified-operators-h4qkl\" (UID: \"56569602-8d2c-486f-8b35-c3587a28b78d\") " pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:33:51 crc kubenswrapper[4708]: I0227 17:33:51.942623 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhwqw\" (UniqueName: \"kubernetes.io/projected/56569602-8d2c-486f-8b35-c3587a28b78d-kube-api-access-qhwqw\") pod \"certified-operators-h4qkl\" (UID: \"56569602-8d2c-486f-8b35-c3587a28b78d\") " pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:33:51 crc kubenswrapper[4708]: I0227 17:33:51.942705 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/56569602-8d2c-486f-8b35-c3587a28b78d-catalog-content\") pod \"certified-operators-h4qkl\" (UID: \"56569602-8d2c-486f-8b35-c3587a28b78d\") " pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:33:51 crc kubenswrapper[4708]: I0227 17:33:51.943150 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56569602-8d2c-486f-8b35-c3587a28b78d-catalog-content\") pod \"certified-operators-h4qkl\" (UID: \"56569602-8d2c-486f-8b35-c3587a28b78d\") " pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:33:51 crc kubenswrapper[4708]: I0227 17:33:51.943364 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56569602-8d2c-486f-8b35-c3587a28b78d-utilities\") pod \"certified-operators-h4qkl\" (UID: \"56569602-8d2c-486f-8b35-c3587a28b78d\") " pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:33:51 crc kubenswrapper[4708]: I0227 17:33:51.964593 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhwqw\" (UniqueName: \"kubernetes.io/projected/56569602-8d2c-486f-8b35-c3587a28b78d-kube-api-access-qhwqw\") pod \"certified-operators-h4qkl\" (UID: \"56569602-8d2c-486f-8b35-c3587a28b78d\") " pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:33:52 crc kubenswrapper[4708]: I0227 17:33:52.084097 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:33:52 crc kubenswrapper[4708]: I0227 17:33:52.554638 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h4qkl"] Feb 27 17:33:52 crc kubenswrapper[4708]: I0227 17:33:52.988628 4708 generic.go:334] "Generic (PLEG): container finished" podID="56569602-8d2c-486f-8b35-c3587a28b78d" containerID="9c9056f7b80af30d12471cd28eb3585021e10898e3e9a2d59bd9e9f542dbacdd" exitCode=0 Feb 27 17:33:52 crc kubenswrapper[4708]: I0227 17:33:52.988671 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4qkl" event={"ID":"56569602-8d2c-486f-8b35-c3587a28b78d","Type":"ContainerDied","Data":"9c9056f7b80af30d12471cd28eb3585021e10898e3e9a2d59bd9e9f542dbacdd"} Feb 27 17:33:52 crc kubenswrapper[4708]: I0227 17:33:52.988696 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4qkl" event={"ID":"56569602-8d2c-486f-8b35-c3587a28b78d","Type":"ContainerStarted","Data":"1e181e1dd391c19b880a2c495eb13fa8af1ff5326343130a752af73a35f56b03"} Feb 27 17:33:54 crc kubenswrapper[4708]: I0227 17:33:54.000836 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4qkl" event={"ID":"56569602-8d2c-486f-8b35-c3587a28b78d","Type":"ContainerStarted","Data":"0db9868401b8bc783e193758c477acaad19f3c5c20af820f9c41a711e49d4f6f"} Feb 27 17:33:55 crc kubenswrapper[4708]: I0227 17:33:55.013377 4708 generic.go:334] "Generic (PLEG): container finished" podID="56569602-8d2c-486f-8b35-c3587a28b78d" containerID="0db9868401b8bc783e193758c477acaad19f3c5c20af820f9c41a711e49d4f6f" exitCode=0 Feb 27 17:33:55 crc kubenswrapper[4708]: I0227 17:33:55.013465 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4qkl" event={"ID":"56569602-8d2c-486f-8b35-c3587a28b78d","Type":"ContainerDied","Data":"0db9868401b8bc783e193758c477acaad19f3c5c20af820f9c41a711e49d4f6f"} 
Feb 27 17:33:56 crc kubenswrapper[4708]: I0227 17:33:56.024095 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4qkl" event={"ID":"56569602-8d2c-486f-8b35-c3587a28b78d","Type":"ContainerStarted","Data":"698a4edae103a2708208eb5df5d8d7c74b7b00a21e65573efe98d7e0de3ed9d1"} Feb 27 17:33:56 crc kubenswrapper[4708]: I0227 17:33:56.043550 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h4qkl" podStartSLOduration=2.57669405 podStartE2EDuration="5.043534781s" podCreationTimestamp="2026-02-27 17:33:51 +0000 UTC" firstStartedPulling="2026-02-27 17:33:52.990388759 +0000 UTC m=+2431.506186346" lastFinishedPulling="2026-02-27 17:33:55.45722948 +0000 UTC m=+2433.973027077" observedRunningTime="2026-02-27 17:33:56.04204358 +0000 UTC m=+2434.557841177" watchObservedRunningTime="2026-02-27 17:33:56.043534781 +0000 UTC m=+2434.559332368" Feb 27 17:34:00 crc kubenswrapper[4708]: I0227 17:34:00.178271 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536894-ccrlc"] Feb 27 17:34:00 crc kubenswrapper[4708]: I0227 17:34:00.180684 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536894-ccrlc" Feb 27 17:34:00 crc kubenswrapper[4708]: I0227 17:34:00.183178 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:34:00 crc kubenswrapper[4708]: I0227 17:34:00.183364 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:34:00 crc kubenswrapper[4708]: I0227 17:34:00.185246 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:34:00 crc kubenswrapper[4708]: I0227 17:34:00.192388 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536894-ccrlc"] Feb 27 17:34:00 crc kubenswrapper[4708]: I0227 17:34:00.223060 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jvf7\" (UniqueName: \"kubernetes.io/projected/9108e6ed-454b-444e-977a-e710b2da2e6c-kube-api-access-5jvf7\") pod \"auto-csr-approver-29536894-ccrlc\" (UID: \"9108e6ed-454b-444e-977a-e710b2da2e6c\") " pod="openshift-infra/auto-csr-approver-29536894-ccrlc" Feb 27 17:34:00 crc kubenswrapper[4708]: I0227 17:34:00.325965 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jvf7\" (UniqueName: \"kubernetes.io/projected/9108e6ed-454b-444e-977a-e710b2da2e6c-kube-api-access-5jvf7\") pod \"auto-csr-approver-29536894-ccrlc\" (UID: \"9108e6ed-454b-444e-977a-e710b2da2e6c\") " pod="openshift-infra/auto-csr-approver-29536894-ccrlc" Feb 27 17:34:00 crc kubenswrapper[4708]: I0227 17:34:00.348418 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jvf7\" (UniqueName: \"kubernetes.io/projected/9108e6ed-454b-444e-977a-e710b2da2e6c-kube-api-access-5jvf7\") pod \"auto-csr-approver-29536894-ccrlc\" (UID: \"9108e6ed-454b-444e-977a-e710b2da2e6c\") " pod="openshift-infra/auto-csr-approver-29536894-ccrlc" Feb 27 17:34:00 crc kubenswrapper[4708]: I0227 17:34:00.515139 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536894-ccrlc" Feb 27 17:34:00 crc kubenswrapper[4708]: I0227 17:34:00.915864 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536894-ccrlc"] Feb 27 17:34:01 crc kubenswrapper[4708]: I0227 17:34:01.084512 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536894-ccrlc" event={"ID":"9108e6ed-454b-444e-977a-e710b2da2e6c","Type":"ContainerStarted","Data":"5a7821e43a53604c685aa5bbe5581bb5c1a9b487522a8978bd2bf58fee59f1e1"} Feb 27 17:34:02 crc kubenswrapper[4708]: I0227 17:34:02.084552 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:34:02 crc kubenswrapper[4708]: I0227 17:34:02.084633 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:34:02 crc kubenswrapper[4708]: I0227 17:34:02.140359 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:34:03 crc kubenswrapper[4708]: I0227 17:34:03.109552 4708 generic.go:334] "Generic (PLEG): container finished" podID="9108e6ed-454b-444e-977a-e710b2da2e6c" containerID="1a6d74b88bee2ada5cb9b65b9b16772cbace894c0f76d87ac34a78a94e3219e0" exitCode=0 Feb 27 17:34:03 crc kubenswrapper[4708]: I0227 17:34:03.109642 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536894-ccrlc" event={"ID":"9108e6ed-454b-444e-977a-e710b2da2e6c","Type":"ContainerDied","Data":"1a6d74b88bee2ada5cb9b65b9b16772cbace894c0f76d87ac34a78a94e3219e0"} Feb 27 17:34:03 crc kubenswrapper[4708]: I0227 17:34:03.200657 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:34:03 crc kubenswrapper[4708]: I0227 17:34:03.229038 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:34:03 crc kubenswrapper[4708]: E0227 17:34:03.229779 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:34:04 crc kubenswrapper[4708]: I0227 17:34:04.679472 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536894-ccrlc" Feb 27 17:34:04 crc kubenswrapper[4708]: I0227 17:34:04.738621 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5jvf7\" (UniqueName: \"kubernetes.io/projected/9108e6ed-454b-444e-977a-e710b2da2e6c-kube-api-access-5jvf7\") pod \"9108e6ed-454b-444e-977a-e710b2da2e6c\" (UID: \"9108e6ed-454b-444e-977a-e710b2da2e6c\") " Feb 27 17:34:04 crc kubenswrapper[4708]: I0227 17:34:04.745112 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9108e6ed-454b-444e-977a-e710b2da2e6c-kube-api-access-5jvf7" (OuterVolumeSpecName: "kube-api-access-5jvf7") pod "9108e6ed-454b-444e-977a-e710b2da2e6c" (UID: "9108e6ed-454b-444e-977a-e710b2da2e6c"). 
InnerVolumeSpecName "kube-api-access-5jvf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:34:04 crc kubenswrapper[4708]: I0227 17:34:04.841076 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5jvf7\" (UniqueName: \"kubernetes.io/projected/9108e6ed-454b-444e-977a-e710b2da2e6c-kube-api-access-5jvf7\") on node \"crc\" DevicePath \"\"" Feb 27 17:34:05 crc kubenswrapper[4708]: I0227 17:34:05.140427 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536894-ccrlc" event={"ID":"9108e6ed-454b-444e-977a-e710b2da2e6c","Type":"ContainerDied","Data":"5a7821e43a53604c685aa5bbe5581bb5c1a9b487522a8978bd2bf58fee59f1e1"} Feb 27 17:34:05 crc kubenswrapper[4708]: I0227 17:34:05.140482 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a7821e43a53604c685aa5bbe5581bb5c1a9b487522a8978bd2bf58fee59f1e1" Feb 27 17:34:05 crc kubenswrapper[4708]: I0227 17:34:05.141007 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536894-ccrlc" Feb 27 17:34:05 crc kubenswrapper[4708]: I0227 17:34:05.539754 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4qkl"] Feb 27 17:34:05 crc kubenswrapper[4708]: I0227 17:34:05.539983 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h4qkl" podUID="56569602-8d2c-486f-8b35-c3587a28b78d" containerName="registry-server" containerID="cri-o://698a4edae103a2708208eb5df5d8d7c74b7b00a21e65573efe98d7e0de3ed9d1" gracePeriod=2 Feb 27 17:34:05 crc kubenswrapper[4708]: I0227 17:34:05.796996 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536888-7hhxj"] Feb 27 17:34:05 crc kubenswrapper[4708]: I0227 17:34:05.807458 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536888-7hhxj"] Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.073624 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.159446 4708 generic.go:334] "Generic (PLEG): container finished" podID="56569602-8d2c-486f-8b35-c3587a28b78d" containerID="698a4edae103a2708208eb5df5d8d7c74b7b00a21e65573efe98d7e0de3ed9d1" exitCode=0 Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.159489 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h4qkl" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.159509 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4qkl" event={"ID":"56569602-8d2c-486f-8b35-c3587a28b78d","Type":"ContainerDied","Data":"698a4edae103a2708208eb5df5d8d7c74b7b00a21e65573efe98d7e0de3ed9d1"} Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.161198 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h4qkl" event={"ID":"56569602-8d2c-486f-8b35-c3587a28b78d","Type":"ContainerDied","Data":"1e181e1dd391c19b880a2c495eb13fa8af1ff5326343130a752af73a35f56b03"} Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.161223 4708 scope.go:117] "RemoveContainer" containerID="698a4edae103a2708208eb5df5d8d7c74b7b00a21e65573efe98d7e0de3ed9d1" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.179610 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qhwqw\" (UniqueName: \"kubernetes.io/projected/56569602-8d2c-486f-8b35-c3587a28b78d-kube-api-access-qhwqw\") pod \"56569602-8d2c-486f-8b35-c3587a28b78d\" (UID: \"56569602-8d2c-486f-8b35-c3587a28b78d\") " Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.180206 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56569602-8d2c-486f-8b35-c3587a28b78d-catalog-content\") pod \"56569602-8d2c-486f-8b35-c3587a28b78d\" (UID: \"56569602-8d2c-486f-8b35-c3587a28b78d\") " Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.180358 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56569602-8d2c-486f-8b35-c3587a28b78d-utilities\") pod \"56569602-8d2c-486f-8b35-c3587a28b78d\" (UID: \"56569602-8d2c-486f-8b35-c3587a28b78d\") " Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.182018 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56569602-8d2c-486f-8b35-c3587a28b78d-utilities" (OuterVolumeSpecName: "utilities") pod "56569602-8d2c-486f-8b35-c3587a28b78d" (UID: "56569602-8d2c-486f-8b35-c3587a28b78d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.185039 4708 scope.go:117] "RemoveContainer" containerID="0db9868401b8bc783e193758c477acaad19f3c5c20af820f9c41a711e49d4f6f" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.185607 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56569602-8d2c-486f-8b35-c3587a28b78d-kube-api-access-qhwqw" (OuterVolumeSpecName: "kube-api-access-qhwqw") pod "56569602-8d2c-486f-8b35-c3587a28b78d" (UID: "56569602-8d2c-486f-8b35-c3587a28b78d"). InnerVolumeSpecName "kube-api-access-qhwqw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.243307 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db488d12-6b42-4eda-8827-2c5c174a4e60" path="/var/lib/kubelet/pods/db488d12-6b42-4eda-8827-2c5c174a4e60/volumes" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.253172 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/56569602-8d2c-486f-8b35-c3587a28b78d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "56569602-8d2c-486f-8b35-c3587a28b78d" (UID: "56569602-8d2c-486f-8b35-c3587a28b78d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.275886 4708 scope.go:117] "RemoveContainer" containerID="9c9056f7b80af30d12471cd28eb3585021e10898e3e9a2d59bd9e9f542dbacdd" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.282682 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qhwqw\" (UniqueName: \"kubernetes.io/projected/56569602-8d2c-486f-8b35-c3587a28b78d-kube-api-access-qhwqw\") on node \"crc\" DevicePath \"\"" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.282785 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/56569602-8d2c-486f-8b35-c3587a28b78d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.282838 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/56569602-8d2c-486f-8b35-c3587a28b78d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.330211 4708 scope.go:117] "RemoveContainer" containerID="698a4edae103a2708208eb5df5d8d7c74b7b00a21e65573efe98d7e0de3ed9d1" Feb 27 17:34:06 crc kubenswrapper[4708]: E0227 17:34:06.332171 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"698a4edae103a2708208eb5df5d8d7c74b7b00a21e65573efe98d7e0de3ed9d1\": container with ID starting with 698a4edae103a2708208eb5df5d8d7c74b7b00a21e65573efe98d7e0de3ed9d1 not found: ID does not exist" containerID="698a4edae103a2708208eb5df5d8d7c74b7b00a21e65573efe98d7e0de3ed9d1" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.332325 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"698a4edae103a2708208eb5df5d8d7c74b7b00a21e65573efe98d7e0de3ed9d1"} err="failed to get container status \"698a4edae103a2708208eb5df5d8d7c74b7b00a21e65573efe98d7e0de3ed9d1\": rpc error: code = NotFound desc = could not find container \"698a4edae103a2708208eb5df5d8d7c74b7b00a21e65573efe98d7e0de3ed9d1\": container with ID starting with 698a4edae103a2708208eb5df5d8d7c74b7b00a21e65573efe98d7e0de3ed9d1 not found: ID does not exist" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.332405 4708 scope.go:117] "RemoveContainer" containerID="0db9868401b8bc783e193758c477acaad19f3c5c20af820f9c41a711e49d4f6f" Feb 27 17:34:06 crc kubenswrapper[4708]: E0227 17:34:06.333000 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0db9868401b8bc783e193758c477acaad19f3c5c20af820f9c41a711e49d4f6f\": container with ID starting with 0db9868401b8bc783e193758c477acaad19f3c5c20af820f9c41a711e49d4f6f not found: ID does not exist" 
containerID="0db9868401b8bc783e193758c477acaad19f3c5c20af820f9c41a711e49d4f6f" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.333053 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0db9868401b8bc783e193758c477acaad19f3c5c20af820f9c41a711e49d4f6f"} err="failed to get container status \"0db9868401b8bc783e193758c477acaad19f3c5c20af820f9c41a711e49d4f6f\": rpc error: code = NotFound desc = could not find container \"0db9868401b8bc783e193758c477acaad19f3c5c20af820f9c41a711e49d4f6f\": container with ID starting with 0db9868401b8bc783e193758c477acaad19f3c5c20af820f9c41a711e49d4f6f not found: ID does not exist" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.333083 4708 scope.go:117] "RemoveContainer" containerID="9c9056f7b80af30d12471cd28eb3585021e10898e3e9a2d59bd9e9f542dbacdd" Feb 27 17:34:06 crc kubenswrapper[4708]: E0227 17:34:06.333402 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c9056f7b80af30d12471cd28eb3585021e10898e3e9a2d59bd9e9f542dbacdd\": container with ID starting with 9c9056f7b80af30d12471cd28eb3585021e10898e3e9a2d59bd9e9f542dbacdd not found: ID does not exist" containerID="9c9056f7b80af30d12471cd28eb3585021e10898e3e9a2d59bd9e9f542dbacdd" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.333430 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c9056f7b80af30d12471cd28eb3585021e10898e3e9a2d59bd9e9f542dbacdd"} err="failed to get container status \"9c9056f7b80af30d12471cd28eb3585021e10898e3e9a2d59bd9e9f542dbacdd\": rpc error: code = NotFound desc = could not find container \"9c9056f7b80af30d12471cd28eb3585021e10898e3e9a2d59bd9e9f542dbacdd\": container with ID starting with 9c9056f7b80af30d12471cd28eb3585021e10898e3e9a2d59bd9e9f542dbacdd not found: ID does not exist" Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.507902 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h4qkl"] Feb 27 17:34:06 crc kubenswrapper[4708]: I0227 17:34:06.524822 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h4qkl"] Feb 27 17:34:08 crc kubenswrapper[4708]: I0227 17:34:08.249317 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56569602-8d2c-486f-8b35-c3587a28b78d" path="/var/lib/kubelet/pods/56569602-8d2c-486f-8b35-c3587a28b78d/volumes" Feb 27 17:34:15 crc kubenswrapper[4708]: I0227 17:34:15.230128 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:34:15 crc kubenswrapper[4708]: E0227 17:34:15.233034 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:34:28 crc kubenswrapper[4708]: I0227 17:34:28.229213 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:34:28 crc kubenswrapper[4708]: E0227 17:34:28.236612 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:34:42 crc kubenswrapper[4708]: I0227 17:34:42.237118 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:34:42 crc kubenswrapper[4708]: E0227 17:34:42.238169 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.259873 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mc9bf"] Feb 27 17:34:50 crc kubenswrapper[4708]: E0227 17:34:50.262175 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56569602-8d2c-486f-8b35-c3587a28b78d" containerName="extract-content" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.262198 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="56569602-8d2c-486f-8b35-c3587a28b78d" containerName="extract-content" Feb 27 17:34:50 crc kubenswrapper[4708]: E0227 17:34:50.262232 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56569602-8d2c-486f-8b35-c3587a28b78d" containerName="extract-utilities" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.262240 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="56569602-8d2c-486f-8b35-c3587a28b78d" containerName="extract-utilities" Feb 27 17:34:50 crc kubenswrapper[4708]: E0227 17:34:50.262254 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9108e6ed-454b-444e-977a-e710b2da2e6c" containerName="oc" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.262263 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9108e6ed-454b-444e-977a-e710b2da2e6c" containerName="oc" Feb 27 17:34:50 crc kubenswrapper[4708]: E0227 17:34:50.262290 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="56569602-8d2c-486f-8b35-c3587a28b78d" containerName="registry-server" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.262298 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="56569602-8d2c-486f-8b35-c3587a28b78d" containerName="registry-server" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.262550 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="9108e6ed-454b-444e-977a-e710b2da2e6c" containerName="oc" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.262583 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="56569602-8d2c-486f-8b35-c3587a28b78d" containerName="registry-server" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.264578 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.295693 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mc9bf"] Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.348630 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-utilities\") pod \"community-operators-mc9bf\" (UID: \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\") " pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.349012 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-catalog-content\") pod \"community-operators-mc9bf\" (UID: \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\") " pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.349226 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-schlz\" (UniqueName: \"kubernetes.io/projected/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-kube-api-access-schlz\") pod \"community-operators-mc9bf\" (UID: \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\") " pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.451113 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-catalog-content\") pod \"community-operators-mc9bf\" (UID: \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\") " pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.451224 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-schlz\" (UniqueName: \"kubernetes.io/projected/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-kube-api-access-schlz\") pod \"community-operators-mc9bf\" (UID: \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\") " pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.451350 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-utilities\") pod \"community-operators-mc9bf\" (UID: \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\") " pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.452240 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-utilities\") pod \"community-operators-mc9bf\" (UID: \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\") " pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.452592 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-catalog-content\") pod \"community-operators-mc9bf\" (UID: \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\") " pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.482281 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-schlz\" (UniqueName: \"kubernetes.io/projected/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-kube-api-access-schlz\") pod \"community-operators-mc9bf\" (UID: \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\") " pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:34:50 crc kubenswrapper[4708]: I0227 17:34:50.588762 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:34:51 crc kubenswrapper[4708]: I0227 17:34:51.150538 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mc9bf"] Feb 27 17:34:51 crc kubenswrapper[4708]: I0227 17:34:51.702192 4708 generic.go:334] "Generic (PLEG): container finished" podID="84042df8-bd4b-49a3-9be4-a9a0551bbf7d" containerID="976c7c872688d664deed4d88e3646ce1dbb2a18d3f83d377f11ae7026bb3270c" exitCode=0 Feb 27 17:34:51 crc kubenswrapper[4708]: I0227 17:34:51.702295 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mc9bf" event={"ID":"84042df8-bd4b-49a3-9be4-a9a0551bbf7d","Type":"ContainerDied","Data":"976c7c872688d664deed4d88e3646ce1dbb2a18d3f83d377f11ae7026bb3270c"} Feb 27 17:34:51 crc kubenswrapper[4708]: I0227 17:34:51.702609 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mc9bf" event={"ID":"84042df8-bd4b-49a3-9be4-a9a0551bbf7d","Type":"ContainerStarted","Data":"00d17f1f499a1a03f61d15f8d1b4e9c119ec5bdd43547a6977e3cf9f2c9cd1cc"} Feb 27 17:34:51 crc kubenswrapper[4708]: I0227 17:34:51.704403 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:34:52 crc kubenswrapper[4708]: I0227 17:34:52.716455 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mc9bf" event={"ID":"84042df8-bd4b-49a3-9be4-a9a0551bbf7d","Type":"ContainerStarted","Data":"57eadc7f538259f84c7b3652779b6f16a83da4c2b5e538c118277fb6f749b83f"} Feb 27 17:34:54 crc kubenswrapper[4708]: I0227 17:34:54.750844 4708 generic.go:334] "Generic (PLEG): container finished" podID="84042df8-bd4b-49a3-9be4-a9a0551bbf7d" containerID="57eadc7f538259f84c7b3652779b6f16a83da4c2b5e538c118277fb6f749b83f" exitCode=0 Feb 27 17:34:54 crc kubenswrapper[4708]: I0227 17:34:54.750906 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mc9bf" event={"ID":"84042df8-bd4b-49a3-9be4-a9a0551bbf7d","Type":"ContainerDied","Data":"57eadc7f538259f84c7b3652779b6f16a83da4c2b5e538c118277fb6f749b83f"} Feb 27 17:34:55 crc kubenswrapper[4708]: I0227 17:34:55.229016 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:34:55 crc kubenswrapper[4708]: E0227 17:34:55.229838 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:34:55 crc kubenswrapper[4708]: I0227 17:34:55.763254 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mc9bf" 
event={"ID":"84042df8-bd4b-49a3-9be4-a9a0551bbf7d","Type":"ContainerStarted","Data":"7715de9017c8a88c7951bdb7cd5052bb2c0f85ab7ab7eecf1570e192b5f45254"} Feb 27 17:34:55 crc kubenswrapper[4708]: I0227 17:34:55.803823 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mc9bf" podStartSLOduration=2.23086559 podStartE2EDuration="5.803798345s" podCreationTimestamp="2026-02-27 17:34:50 +0000 UTC" firstStartedPulling="2026-02-27 17:34:51.704118271 +0000 UTC m=+2490.219915878" lastFinishedPulling="2026-02-27 17:34:55.277051006 +0000 UTC m=+2493.792848633" observedRunningTime="2026-02-27 17:34:55.788258192 +0000 UTC m=+2494.304055939" watchObservedRunningTime="2026-02-27 17:34:55.803798345 +0000 UTC m=+2494.319595942" Feb 27 17:35:00 crc kubenswrapper[4708]: I0227 17:35:00.589885 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:35:00 crc kubenswrapper[4708]: I0227 17:35:00.590675 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:35:00 crc kubenswrapper[4708]: I0227 17:35:00.670587 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:35:00 crc kubenswrapper[4708]: I0227 17:35:00.920367 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:35:00 crc kubenswrapper[4708]: I0227 17:35:00.945528 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5xmph"] Feb 27 17:35:00 crc kubenswrapper[4708]: I0227 17:35:00.948356 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:00 crc kubenswrapper[4708]: I0227 17:35:00.970013 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xmph"] Feb 27 17:35:01 crc kubenswrapper[4708]: I0227 17:35:01.017978 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llbbd\" (UniqueName: \"kubernetes.io/projected/1061fc1f-cb95-429a-bc74-4b90adc56ee4-kube-api-access-llbbd\") pod \"redhat-marketplace-5xmph\" (UID: \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\") " pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:01 crc kubenswrapper[4708]: I0227 17:35:01.018062 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1061fc1f-cb95-429a-bc74-4b90adc56ee4-catalog-content\") pod \"redhat-marketplace-5xmph\" (UID: \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\") " pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:01 crc kubenswrapper[4708]: I0227 17:35:01.018465 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1061fc1f-cb95-429a-bc74-4b90adc56ee4-utilities\") pod \"redhat-marketplace-5xmph\" (UID: \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\") " pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:01 crc kubenswrapper[4708]: I0227 17:35:01.121112 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llbbd\" (UniqueName: \"kubernetes.io/projected/1061fc1f-cb95-429a-bc74-4b90adc56ee4-kube-api-access-llbbd\") pod \"redhat-marketplace-5xmph\" (UID: \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\") " pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:01 crc kubenswrapper[4708]: I0227 17:35:01.121185 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1061fc1f-cb95-429a-bc74-4b90adc56ee4-catalog-content\") pod \"redhat-marketplace-5xmph\" (UID: \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\") " pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:01 crc kubenswrapper[4708]: I0227 17:35:01.121327 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1061fc1f-cb95-429a-bc74-4b90adc56ee4-utilities\") pod \"redhat-marketplace-5xmph\" (UID: \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\") " pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:01 crc kubenswrapper[4708]: I0227 17:35:01.121737 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1061fc1f-cb95-429a-bc74-4b90adc56ee4-catalog-content\") pod \"redhat-marketplace-5xmph\" (UID: \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\") " pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:01 crc kubenswrapper[4708]: I0227 17:35:01.122220 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1061fc1f-cb95-429a-bc74-4b90adc56ee4-utilities\") pod \"redhat-marketplace-5xmph\" (UID: \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\") " pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:01 crc kubenswrapper[4708]: I0227 17:35:01.152900 4708 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-llbbd\" (UniqueName: \"kubernetes.io/projected/1061fc1f-cb95-429a-bc74-4b90adc56ee4-kube-api-access-llbbd\") pod \"redhat-marketplace-5xmph\" (UID: \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\") " pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:01 crc kubenswrapper[4708]: I0227 17:35:01.279516 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:01 crc kubenswrapper[4708]: I0227 17:35:01.564643 4708 scope.go:117] "RemoveContainer" containerID="d252a3f599a71f7772121888e0156ca9310d097f16b6451b544a93e5da1bf35d" Feb 27 17:35:01 crc kubenswrapper[4708]: I0227 17:35:01.851926 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xmph"] Feb 27 17:35:02 crc kubenswrapper[4708]: I0227 17:35:02.843960 4708 generic.go:334] "Generic (PLEG): container finished" podID="1061fc1f-cb95-429a-bc74-4b90adc56ee4" containerID="610343363c8da86d44f4d8ccdfbd5480647d5c378285cd35a13351c8b8ae35f6" exitCode=0 Feb 27 17:35:02 crc kubenswrapper[4708]: I0227 17:35:02.844032 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xmph" event={"ID":"1061fc1f-cb95-429a-bc74-4b90adc56ee4","Type":"ContainerDied","Data":"610343363c8da86d44f4d8ccdfbd5480647d5c378285cd35a13351c8b8ae35f6"} Feb 27 17:35:02 crc kubenswrapper[4708]: I0227 17:35:02.844335 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xmph" event={"ID":"1061fc1f-cb95-429a-bc74-4b90adc56ee4","Type":"ContainerStarted","Data":"f108d981c3f8f94cf6d90fc047ed6300e7af98654785a3a94e2d14d0ddaa34ea"} Feb 27 17:35:03 crc kubenswrapper[4708]: I0227 17:35:03.315063 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mc9bf"] Feb 27 17:35:03 crc kubenswrapper[4708]: I0227 17:35:03.315341 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mc9bf" podUID="84042df8-bd4b-49a3-9be4-a9a0551bbf7d" containerName="registry-server" containerID="cri-o://7715de9017c8a88c7951bdb7cd5052bb2c0f85ab7ab7eecf1570e192b5f45254" gracePeriod=2 Feb 27 17:35:04 crc kubenswrapper[4708]: I0227 17:35:04.893874 4708 generic.go:334] "Generic (PLEG): container finished" podID="84042df8-bd4b-49a3-9be4-a9a0551bbf7d" containerID="7715de9017c8a88c7951bdb7cd5052bb2c0f85ab7ab7eecf1570e192b5f45254" exitCode=0 Feb 27 17:35:04 crc kubenswrapper[4708]: I0227 17:35:04.894084 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mc9bf" event={"ID":"84042df8-bd4b-49a3-9be4-a9a0551bbf7d","Type":"ContainerDied","Data":"7715de9017c8a88c7951bdb7cd5052bb2c0f85ab7ab7eecf1570e192b5f45254"} Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.225882 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.327723 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-utilities\") pod \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\" (UID: \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\") " Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.328160 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-catalog-content\") pod \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\" (UID: \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\") " Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.328304 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-schlz\" (UniqueName: \"kubernetes.io/projected/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-kube-api-access-schlz\") pod \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\" (UID: \"84042df8-bd4b-49a3-9be4-a9a0551bbf7d\") " Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.329178 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-utilities" (OuterVolumeSpecName: "utilities") pod "84042df8-bd4b-49a3-9be4-a9a0551bbf7d" (UID: "84042df8-bd4b-49a3-9be4-a9a0551bbf7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.335360 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-kube-api-access-schlz" (OuterVolumeSpecName: "kube-api-access-schlz") pod "84042df8-bd4b-49a3-9be4-a9a0551bbf7d" (UID: "84042df8-bd4b-49a3-9be4-a9a0551bbf7d"). InnerVolumeSpecName "kube-api-access-schlz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.389835 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84042df8-bd4b-49a3-9be4-a9a0551bbf7d" (UID: "84042df8-bd4b-49a3-9be4-a9a0551bbf7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.430743 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.430777 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-schlz\" (UniqueName: \"kubernetes.io/projected/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-kube-api-access-schlz\") on node \"crc\" DevicePath \"\"" Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.430790 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84042df8-bd4b-49a3-9be4-a9a0551bbf7d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.927814 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mc9bf" event={"ID":"84042df8-bd4b-49a3-9be4-a9a0551bbf7d","Type":"ContainerDied","Data":"00d17f1f499a1a03f61d15f8d1b4e9c119ec5bdd43547a6977e3cf9f2c9cd1cc"} Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.928133 4708 scope.go:117] "RemoveContainer" containerID="7715de9017c8a88c7951bdb7cd5052bb2c0f85ab7ab7eecf1570e192b5f45254" Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.927979 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mc9bf" Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.962321 4708 scope.go:117] "RemoveContainer" containerID="57eadc7f538259f84c7b3652779b6f16a83da4c2b5e538c118277fb6f749b83f" Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.980877 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mc9bf"] Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.989470 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mc9bf"] Feb 27 17:35:05 crc kubenswrapper[4708]: I0227 17:35:05.997499 4708 scope.go:117] "RemoveContainer" containerID="976c7c872688d664deed4d88e3646ce1dbb2a18d3f83d377f11ae7026bb3270c" Feb 27 17:35:06 crc kubenswrapper[4708]: I0227 17:35:06.252470 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84042df8-bd4b-49a3-9be4-a9a0551bbf7d" path="/var/lib/kubelet/pods/84042df8-bd4b-49a3-9be4-a9a0551bbf7d/volumes" Feb 27 17:35:10 crc kubenswrapper[4708]: I0227 17:35:10.229264 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:35:10 crc kubenswrapper[4708]: E0227 17:35:10.231194 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:35:25 crc kubenswrapper[4708]: I0227 17:35:25.228979 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:35:25 crc kubenswrapper[4708]: E0227 17:35:25.229696 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" 
with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:35:37 crc kubenswrapper[4708]: I0227 17:35:37.229085 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:35:37 crc kubenswrapper[4708]: E0227 17:35:37.230406 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:35:39 crc kubenswrapper[4708]: I0227 17:35:39.296468 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xmph" event={"ID":"1061fc1f-cb95-429a-bc74-4b90adc56ee4","Type":"ContainerStarted","Data":"9080919e0c8e42c39e708e10aa2e37de5aef526491b05e946a8b388b6f73f873"} Feb 27 17:35:40 crc kubenswrapper[4708]: I0227 17:35:40.310817 4708 generic.go:334] "Generic (PLEG): container finished" podID="1061fc1f-cb95-429a-bc74-4b90adc56ee4" containerID="9080919e0c8e42c39e708e10aa2e37de5aef526491b05e946a8b388b6f73f873" exitCode=0 Feb 27 17:35:40 crc kubenswrapper[4708]: I0227 17:35:40.310912 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xmph" event={"ID":"1061fc1f-cb95-429a-bc74-4b90adc56ee4","Type":"ContainerDied","Data":"9080919e0c8e42c39e708e10aa2e37de5aef526491b05e946a8b388b6f73f873"} Feb 27 17:35:41 crc kubenswrapper[4708]: I0227 17:35:41.323615 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xmph" event={"ID":"1061fc1f-cb95-429a-bc74-4b90adc56ee4","Type":"ContainerStarted","Data":"d82238c6bc2f991bfb1b9ce281e4f31fca77f079feb53fe2f6e60be784ab3d2c"} Feb 27 17:35:41 crc kubenswrapper[4708]: I0227 17:35:41.343606 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5xmph" podStartSLOduration=3.1508185810000002 podStartE2EDuration="41.343580101s" podCreationTimestamp="2026-02-27 17:35:00 +0000 UTC" firstStartedPulling="2026-02-27 17:35:02.848061914 +0000 UTC m=+2501.363859531" lastFinishedPulling="2026-02-27 17:35:41.040823464 +0000 UTC m=+2539.556621051" observedRunningTime="2026-02-27 17:35:41.340749702 +0000 UTC m=+2539.856547289" watchObservedRunningTime="2026-02-27 17:35:41.343580101 +0000 UTC m=+2539.859377718" Feb 27 17:35:51 crc kubenswrapper[4708]: I0227 17:35:51.228667 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:35:51 crc kubenswrapper[4708]: E0227 17:35:51.230002 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:35:51 crc 
kubenswrapper[4708]: I0227 17:35:51.280577 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:51 crc kubenswrapper[4708]: I0227 17:35:51.280869 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:51 crc kubenswrapper[4708]: I0227 17:35:51.348555 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:51 crc kubenswrapper[4708]: I0227 17:35:51.474531 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:51 crc kubenswrapper[4708]: I0227 17:35:51.608358 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xmph"] Feb 27 17:35:53 crc kubenswrapper[4708]: I0227 17:35:53.442602 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5xmph" podUID="1061fc1f-cb95-429a-bc74-4b90adc56ee4" containerName="registry-server" containerID="cri-o://d82238c6bc2f991bfb1b9ce281e4f31fca77f079feb53fe2f6e60be784ab3d2c" gracePeriod=2 Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.003341 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.117821 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llbbd\" (UniqueName: \"kubernetes.io/projected/1061fc1f-cb95-429a-bc74-4b90adc56ee4-kube-api-access-llbbd\") pod \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\" (UID: \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\") " Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.118299 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1061fc1f-cb95-429a-bc74-4b90adc56ee4-catalog-content\") pod \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\" (UID: \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\") " Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.118468 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1061fc1f-cb95-429a-bc74-4b90adc56ee4-utilities\") pod \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\" (UID: \"1061fc1f-cb95-429a-bc74-4b90adc56ee4\") " Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.119263 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1061fc1f-cb95-429a-bc74-4b90adc56ee4-utilities" (OuterVolumeSpecName: "utilities") pod "1061fc1f-cb95-429a-bc74-4b90adc56ee4" (UID: "1061fc1f-cb95-429a-bc74-4b90adc56ee4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.130094 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1061fc1f-cb95-429a-bc74-4b90adc56ee4-kube-api-access-llbbd" (OuterVolumeSpecName: "kube-api-access-llbbd") pod "1061fc1f-cb95-429a-bc74-4b90adc56ee4" (UID: "1061fc1f-cb95-429a-bc74-4b90adc56ee4"). InnerVolumeSpecName "kube-api-access-llbbd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.141375 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1061fc1f-cb95-429a-bc74-4b90adc56ee4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1061fc1f-cb95-429a-bc74-4b90adc56ee4" (UID: "1061fc1f-cb95-429a-bc74-4b90adc56ee4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.221121 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1061fc1f-cb95-429a-bc74-4b90adc56ee4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.221168 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1061fc1f-cb95-429a-bc74-4b90adc56ee4-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.221182 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llbbd\" (UniqueName: \"kubernetes.io/projected/1061fc1f-cb95-429a-bc74-4b90adc56ee4-kube-api-access-llbbd\") on node \"crc\" DevicePath \"\"" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.454624 4708 generic.go:334] "Generic (PLEG): container finished" podID="1061fc1f-cb95-429a-bc74-4b90adc56ee4" containerID="d82238c6bc2f991bfb1b9ce281e4f31fca77f079feb53fe2f6e60be784ab3d2c" exitCode=0 Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.454669 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xmph" event={"ID":"1061fc1f-cb95-429a-bc74-4b90adc56ee4","Type":"ContainerDied","Data":"d82238c6bc2f991bfb1b9ce281e4f31fca77f079feb53fe2f6e60be784ab3d2c"} Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.454696 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5xmph" event={"ID":"1061fc1f-cb95-429a-bc74-4b90adc56ee4","Type":"ContainerDied","Data":"f108d981c3f8f94cf6d90fc047ed6300e7af98654785a3a94e2d14d0ddaa34ea"} Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.454699 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5xmph" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.454713 4708 scope.go:117] "RemoveContainer" containerID="d82238c6bc2f991bfb1b9ce281e4f31fca77f079feb53fe2f6e60be784ab3d2c" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.497391 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xmph"] Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.499226 4708 scope.go:117] "RemoveContainer" containerID="9080919e0c8e42c39e708e10aa2e37de5aef526491b05e946a8b388b6f73f873" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.510876 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5xmph"] Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.543478 4708 scope.go:117] "RemoveContainer" containerID="610343363c8da86d44f4d8ccdfbd5480647d5c378285cd35a13351c8b8ae35f6" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.597319 4708 scope.go:117] "RemoveContainer" containerID="d82238c6bc2f991bfb1b9ce281e4f31fca77f079feb53fe2f6e60be784ab3d2c" Feb 27 17:35:54 crc kubenswrapper[4708]: E0227 17:35:54.598142 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d82238c6bc2f991bfb1b9ce281e4f31fca77f079feb53fe2f6e60be784ab3d2c\": container with ID starting with d82238c6bc2f991bfb1b9ce281e4f31fca77f079feb53fe2f6e60be784ab3d2c not found: ID does not exist" containerID="d82238c6bc2f991bfb1b9ce281e4f31fca77f079feb53fe2f6e60be784ab3d2c" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.598200 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d82238c6bc2f991bfb1b9ce281e4f31fca77f079feb53fe2f6e60be784ab3d2c"} err="failed to get container status \"d82238c6bc2f991bfb1b9ce281e4f31fca77f079feb53fe2f6e60be784ab3d2c\": rpc error: code = NotFound desc = could not find container \"d82238c6bc2f991bfb1b9ce281e4f31fca77f079feb53fe2f6e60be784ab3d2c\": container with ID starting with d82238c6bc2f991bfb1b9ce281e4f31fca77f079feb53fe2f6e60be784ab3d2c not found: ID does not exist" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.598232 4708 scope.go:117] "RemoveContainer" containerID="9080919e0c8e42c39e708e10aa2e37de5aef526491b05e946a8b388b6f73f873" Feb 27 17:35:54 crc kubenswrapper[4708]: E0227 17:35:54.598677 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9080919e0c8e42c39e708e10aa2e37de5aef526491b05e946a8b388b6f73f873\": container with ID starting with 9080919e0c8e42c39e708e10aa2e37de5aef526491b05e946a8b388b6f73f873 not found: ID does not exist" containerID="9080919e0c8e42c39e708e10aa2e37de5aef526491b05e946a8b388b6f73f873" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.598724 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9080919e0c8e42c39e708e10aa2e37de5aef526491b05e946a8b388b6f73f873"} err="failed to get container status \"9080919e0c8e42c39e708e10aa2e37de5aef526491b05e946a8b388b6f73f873\": rpc error: code = NotFound desc = could not find container \"9080919e0c8e42c39e708e10aa2e37de5aef526491b05e946a8b388b6f73f873\": container with ID starting with 9080919e0c8e42c39e708e10aa2e37de5aef526491b05e946a8b388b6f73f873 not found: ID does not exist" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.598752 4708 scope.go:117] "RemoveContainer" 
containerID="610343363c8da86d44f4d8ccdfbd5480647d5c378285cd35a13351c8b8ae35f6" Feb 27 17:35:54 crc kubenswrapper[4708]: E0227 17:35:54.599225 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"610343363c8da86d44f4d8ccdfbd5480647d5c378285cd35a13351c8b8ae35f6\": container with ID starting with 610343363c8da86d44f4d8ccdfbd5480647d5c378285cd35a13351c8b8ae35f6 not found: ID does not exist" containerID="610343363c8da86d44f4d8ccdfbd5480647d5c378285cd35a13351c8b8ae35f6" Feb 27 17:35:54 crc kubenswrapper[4708]: I0227 17:35:54.599258 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"610343363c8da86d44f4d8ccdfbd5480647d5c378285cd35a13351c8b8ae35f6"} err="failed to get container status \"610343363c8da86d44f4d8ccdfbd5480647d5c378285cd35a13351c8b8ae35f6\": rpc error: code = NotFound desc = could not find container \"610343363c8da86d44f4d8ccdfbd5480647d5c378285cd35a13351c8b8ae35f6\": container with ID starting with 610343363c8da86d44f4d8ccdfbd5480647d5c378285cd35a13351c8b8ae35f6 not found: ID does not exist" Feb 27 17:35:56 crc kubenswrapper[4708]: I0227 17:35:56.240445 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1061fc1f-cb95-429a-bc74-4b90adc56ee4" path="/var/lib/kubelet/pods/1061fc1f-cb95-429a-bc74-4b90adc56ee4/volumes" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.182974 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536896-6vwll"] Feb 27 17:36:00 crc kubenswrapper[4708]: E0227 17:36:00.184289 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84042df8-bd4b-49a3-9be4-a9a0551bbf7d" containerName="extract-content" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.184307 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="84042df8-bd4b-49a3-9be4-a9a0551bbf7d" containerName="extract-content" Feb 27 17:36:00 crc kubenswrapper[4708]: E0227 17:36:00.184328 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84042df8-bd4b-49a3-9be4-a9a0551bbf7d" containerName="registry-server" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.184336 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="84042df8-bd4b-49a3-9be4-a9a0551bbf7d" containerName="registry-server" Feb 27 17:36:00 crc kubenswrapper[4708]: E0227 17:36:00.184351 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1061fc1f-cb95-429a-bc74-4b90adc56ee4" containerName="extract-utilities" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.184360 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1061fc1f-cb95-429a-bc74-4b90adc56ee4" containerName="extract-utilities" Feb 27 17:36:00 crc kubenswrapper[4708]: E0227 17:36:00.184387 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84042df8-bd4b-49a3-9be4-a9a0551bbf7d" containerName="extract-utilities" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.184395 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="84042df8-bd4b-49a3-9be4-a9a0551bbf7d" containerName="extract-utilities" Feb 27 17:36:00 crc kubenswrapper[4708]: E0227 17:36:00.184409 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1061fc1f-cb95-429a-bc74-4b90adc56ee4" containerName="extract-content" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.184417 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1061fc1f-cb95-429a-bc74-4b90adc56ee4" containerName="extract-content" Feb 
27 17:36:00 crc kubenswrapper[4708]: E0227 17:36:00.184438 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1061fc1f-cb95-429a-bc74-4b90adc56ee4" containerName="registry-server" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.184447 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1061fc1f-cb95-429a-bc74-4b90adc56ee4" containerName="registry-server" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.184697 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="84042df8-bd4b-49a3-9be4-a9a0551bbf7d" containerName="registry-server" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.184723 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="1061fc1f-cb95-429a-bc74-4b90adc56ee4" containerName="registry-server" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.185631 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536896-6vwll" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.188342 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.189317 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.189587 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.213328 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536896-6vwll"] Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.272619 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4228v\" (UniqueName: \"kubernetes.io/projected/70a14662-4bc8-4577-8eb6-f51c9dc4a6b2-kube-api-access-4228v\") pod \"auto-csr-approver-29536896-6vwll\" (UID: \"70a14662-4bc8-4577-8eb6-f51c9dc4a6b2\") " pod="openshift-infra/auto-csr-approver-29536896-6vwll" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.375350 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4228v\" (UniqueName: \"kubernetes.io/projected/70a14662-4bc8-4577-8eb6-f51c9dc4a6b2-kube-api-access-4228v\") pod \"auto-csr-approver-29536896-6vwll\" (UID: \"70a14662-4bc8-4577-8eb6-f51c9dc4a6b2\") " pod="openshift-infra/auto-csr-approver-29536896-6vwll" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.395418 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4228v\" (UniqueName: \"kubernetes.io/projected/70a14662-4bc8-4577-8eb6-f51c9dc4a6b2-kube-api-access-4228v\") pod \"auto-csr-approver-29536896-6vwll\" (UID: \"70a14662-4bc8-4577-8eb6-f51c9dc4a6b2\") " pod="openshift-infra/auto-csr-approver-29536896-6vwll" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.510243 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536896-6vwll" Feb 27 17:36:00 crc kubenswrapper[4708]: I0227 17:36:00.976078 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536896-6vwll"] Feb 27 17:36:01 crc kubenswrapper[4708]: I0227 17:36:01.532732 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536896-6vwll" event={"ID":"70a14662-4bc8-4577-8eb6-f51c9dc4a6b2","Type":"ContainerStarted","Data":"d0bbd9fc8ed36817ced2660d0e87d8846f2a0d74270e230f6dea2cf44e50960f"} Feb 27 17:36:02 crc kubenswrapper[4708]: I0227 17:36:02.549683 4708 generic.go:334] "Generic (PLEG): container finished" podID="70a14662-4bc8-4577-8eb6-f51c9dc4a6b2" containerID="0e9c88c3b760e390c1aab0055697761399677f9a8a1f2bb9d244a13d710f12ff" exitCode=0 Feb 27 17:36:02 crc kubenswrapper[4708]: I0227 17:36:02.549747 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536896-6vwll" event={"ID":"70a14662-4bc8-4577-8eb6-f51c9dc4a6b2","Type":"ContainerDied","Data":"0e9c88c3b760e390c1aab0055697761399677f9a8a1f2bb9d244a13d710f12ff"} Feb 27 17:36:04 crc kubenswrapper[4708]: I0227 17:36:04.070052 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536896-6vwll" Feb 27 17:36:04 crc kubenswrapper[4708]: I0227 17:36:04.229020 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:36:04 crc kubenswrapper[4708]: E0227 17:36:04.229576 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:36:04 crc kubenswrapper[4708]: I0227 17:36:04.250893 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4228v\" (UniqueName: \"kubernetes.io/projected/70a14662-4bc8-4577-8eb6-f51c9dc4a6b2-kube-api-access-4228v\") pod \"70a14662-4bc8-4577-8eb6-f51c9dc4a6b2\" (UID: \"70a14662-4bc8-4577-8eb6-f51c9dc4a6b2\") " Feb 27 17:36:04 crc kubenswrapper[4708]: I0227 17:36:04.258246 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70a14662-4bc8-4577-8eb6-f51c9dc4a6b2-kube-api-access-4228v" (OuterVolumeSpecName: "kube-api-access-4228v") pod "70a14662-4bc8-4577-8eb6-f51c9dc4a6b2" (UID: "70a14662-4bc8-4577-8eb6-f51c9dc4a6b2"). InnerVolumeSpecName "kube-api-access-4228v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:36:04 crc kubenswrapper[4708]: I0227 17:36:04.353782 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4228v\" (UniqueName: \"kubernetes.io/projected/70a14662-4bc8-4577-8eb6-f51c9dc4a6b2-kube-api-access-4228v\") on node \"crc\" DevicePath \"\"" Feb 27 17:36:04 crc kubenswrapper[4708]: I0227 17:36:04.576967 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536896-6vwll" event={"ID":"70a14662-4bc8-4577-8eb6-f51c9dc4a6b2","Type":"ContainerDied","Data":"d0bbd9fc8ed36817ced2660d0e87d8846f2a0d74270e230f6dea2cf44e50960f"} Feb 27 17:36:04 crc kubenswrapper[4708]: I0227 17:36:04.577018 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0bbd9fc8ed36817ced2660d0e87d8846f2a0d74270e230f6dea2cf44e50960f" Feb 27 17:36:04 crc kubenswrapper[4708]: I0227 17:36:04.577081 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536896-6vwll" Feb 27 17:36:05 crc kubenswrapper[4708]: I0227 17:36:05.171427 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536890-vd89r"] Feb 27 17:36:05 crc kubenswrapper[4708]: I0227 17:36:05.180315 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536890-vd89r"] Feb 27 17:36:06 crc kubenswrapper[4708]: I0227 17:36:06.240725 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="756e0e58-e2ac-4348-8ce6-db4fad770f68" path="/var/lib/kubelet/pods/756e0e58-e2ac-4348-8ce6-db4fad770f68/volumes" Feb 27 17:36:15 crc kubenswrapper[4708]: I0227 17:36:15.228906 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:36:15 crc kubenswrapper[4708]: E0227 17:36:15.230358 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:36:27 crc kubenswrapper[4708]: I0227 17:36:27.228686 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:36:27 crc kubenswrapper[4708]: E0227 17:36:27.229923 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:36:41 crc kubenswrapper[4708]: I0227 17:36:41.228333 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:36:41 crc kubenswrapper[4708]: I0227 17:36:41.982736 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"439742b868d272d84c1db3500bcc1b293c0585be3463e98cfb78d97fcb4e3465"} Feb 27 17:37:01 
crc kubenswrapper[4708]: I0227 17:37:01.747128 4708 scope.go:117] "RemoveContainer" containerID="aff1c6f450b4cdee7a7bd72c7e2fc10da262b8925147f756faa1b1399f0bdf7a" Feb 27 17:37:08 crc kubenswrapper[4708]: I0227 17:37:08.254110 4708 generic.go:334] "Generic (PLEG): container finished" podID="8d8413dc-ed60-4d4e-a1ea-92d3f46de85f" containerID="f63990441727892bdc6fabf596d7576f3a92cf87efcf6fe6fee3534f936cad2f" exitCode=0 Feb 27 17:37:08 crc kubenswrapper[4708]: I0227 17:37:08.254194 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" event={"ID":"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f","Type":"ContainerDied","Data":"f63990441727892bdc6fabf596d7576f3a92cf87efcf6fe6fee3534f936cad2f"} Feb 27 17:37:09 crc kubenswrapper[4708]: I0227 17:37:09.846293 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:37:09 crc kubenswrapper[4708]: I0227 17:37:09.971086 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-libvirt-secret-0\") pod \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " Feb 27 17:37:09 crc kubenswrapper[4708]: I0227 17:37:09.971576 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-inventory\") pod \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " Feb 27 17:37:09 crc kubenswrapper[4708]: I0227 17:37:09.971885 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-libvirt-combined-ca-bundle\") pod \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " Feb 27 17:37:09 crc kubenswrapper[4708]: I0227 17:37:09.972041 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-59bvb\" (UniqueName: \"kubernetes.io/projected/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-kube-api-access-59bvb\") pod \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " Feb 27 17:37:09 crc kubenswrapper[4708]: I0227 17:37:09.972190 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-ssh-key-openstack-edpm-ipam\") pod \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\" (UID: \"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f\") " Feb 27 17:37:09 crc kubenswrapper[4708]: I0227 17:37:09.978251 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "8d8413dc-ed60-4d4e-a1ea-92d3f46de85f" (UID: "8d8413dc-ed60-4d4e-a1ea-92d3f46de85f"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:37:09 crc kubenswrapper[4708]: I0227 17:37:09.978654 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-kube-api-access-59bvb" (OuterVolumeSpecName: "kube-api-access-59bvb") pod "8d8413dc-ed60-4d4e-a1ea-92d3f46de85f" (UID: "8d8413dc-ed60-4d4e-a1ea-92d3f46de85f"). InnerVolumeSpecName "kube-api-access-59bvb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.003082 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "8d8413dc-ed60-4d4e-a1ea-92d3f46de85f" (UID: "8d8413dc-ed60-4d4e-a1ea-92d3f46de85f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.003114 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "8d8413dc-ed60-4d4e-a1ea-92d3f46de85f" (UID: "8d8413dc-ed60-4d4e-a1ea-92d3f46de85f"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.005131 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-inventory" (OuterVolumeSpecName: "inventory") pod "8d8413dc-ed60-4d4e-a1ea-92d3f46de85f" (UID: "8d8413dc-ed60-4d4e-a1ea-92d3f46de85f"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.074597 4708 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.074646 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-59bvb\" (UniqueName: \"kubernetes.io/projected/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-kube-api-access-59bvb\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.074657 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.074670 4708 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.074682 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/8d8413dc-ed60-4d4e-a1ea-92d3f46de85f-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.276362 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" event={"ID":"8d8413dc-ed60-4d4e-a1ea-92d3f46de85f","Type":"ContainerDied","Data":"36ab7b4092be22aaf3dad797e301e5095892ee3267d440e2f9680b2a4bb406f3"} Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.276424 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36ab7b4092be22aaf3dad797e301e5095892ee3267d440e2f9680b2a4bb406f3" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.276452 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-gqvds" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.391127 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s"] Feb 27 17:37:10 crc kubenswrapper[4708]: E0227 17:37:10.391643 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70a14662-4bc8-4577-8eb6-f51c9dc4a6b2" containerName="oc" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.391663 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="70a14662-4bc8-4577-8eb6-f51c9dc4a6b2" containerName="oc" Feb 27 17:37:10 crc kubenswrapper[4708]: E0227 17:37:10.391690 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d8413dc-ed60-4d4e-a1ea-92d3f46de85f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.391698 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d8413dc-ed60-4d4e-a1ea-92d3f46de85f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.391930 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="70a14662-4bc8-4577-8eb6-f51c9dc4a6b2" containerName="oc" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.391956 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d8413dc-ed60-4d4e-a1ea-92d3f46de85f" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.392765 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.395077 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.395458 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.395952 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.396364 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.396508 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.396615 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.397011 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.417800 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s"] Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.487630 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 
27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.487750 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.487884 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.487991 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.488086 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.488295 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgnf7\" (UniqueName: \"kubernetes.io/projected/991979f1-f211-41ce-b112-fa555006dfec-kube-api-access-kgnf7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.488419 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.488525 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.488582 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-migration-ssh-key-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.488636 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/991979f1-f211-41ce-b112-fa555006dfec-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.488759 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.591255 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.591333 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.591376 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/991979f1-f211-41ce-b112-fa555006dfec-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.591462 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.591612 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.591658 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-1\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.591698 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.591740 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.591776 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.591872 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgnf7\" (UniqueName: \"kubernetes.io/projected/991979f1-f211-41ce-b112-fa555006dfec-kube-api-access-kgnf7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.591935 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.594102 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/991979f1-f211-41ce-b112-fa555006dfec-nova-extra-config-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.598277 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-migration-ssh-key-0\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.599495 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-0\") pod 
\"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.602770 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-2\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.605287 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-ssh-key-openstack-edpm-ipam\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.605510 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-combined-ca-bundle\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.605583 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-migration-ssh-key-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.606439 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-1\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.608060 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-3\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.608532 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-inventory\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.612667 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgnf7\" (UniqueName: \"kubernetes.io/projected/991979f1-f211-41ce-b112-fa555006dfec-kube-api-access-kgnf7\") pod \"nova-edpm-deployment-openstack-edpm-ipam-w575s\" (UID: 
\"991979f1-f211-41ce-b112-fa555006dfec\") " pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:10 crc kubenswrapper[4708]: I0227 17:37:10.713327 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:37:11 crc kubenswrapper[4708]: I0227 17:37:11.384649 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s"] Feb 27 17:37:12 crc kubenswrapper[4708]: I0227 17:37:12.306185 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" event={"ID":"991979f1-f211-41ce-b112-fa555006dfec","Type":"ContainerStarted","Data":"9a9c69666133508c44ec213329bbaddc07eb021fcb026a8168e6ee1362ea8dc2"} Feb 27 17:37:12 crc kubenswrapper[4708]: I0227 17:37:12.309607 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" event={"ID":"991979f1-f211-41ce-b112-fa555006dfec","Type":"ContainerStarted","Data":"04755cddb9e7a51a1871865b44b50872be5f311421c0512964075d2403602aa3"} Feb 27 17:37:12 crc kubenswrapper[4708]: I0227 17:37:12.330321 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" podStartSLOduration=1.8971222490000001 podStartE2EDuration="2.33029874s" podCreationTimestamp="2026-02-27 17:37:10 +0000 UTC" firstStartedPulling="2026-02-27 17:37:11.382009514 +0000 UTC m=+2629.897807111" lastFinishedPulling="2026-02-27 17:37:11.815186015 +0000 UTC m=+2630.330983602" observedRunningTime="2026-02-27 17:37:12.325953299 +0000 UTC m=+2630.841750896" watchObservedRunningTime="2026-02-27 17:37:12.33029874 +0000 UTC m=+2630.846096327" Feb 27 17:38:00 crc kubenswrapper[4708]: I0227 17:38:00.144279 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536898-dk72j"] Feb 27 17:38:00 crc kubenswrapper[4708]: I0227 17:38:00.148015 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536898-dk72j" Feb 27 17:38:00 crc kubenswrapper[4708]: I0227 17:38:00.151425 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:38:00 crc kubenswrapper[4708]: I0227 17:38:00.152089 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:38:00 crc kubenswrapper[4708]: I0227 17:38:00.158533 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:38:00 crc kubenswrapper[4708]: I0227 17:38:00.160924 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536898-dk72j"] Feb 27 17:38:00 crc kubenswrapper[4708]: I0227 17:38:00.251360 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wh29b\" (UniqueName: \"kubernetes.io/projected/1b916670-1fc9-40b4-b106-99c7de6b151a-kube-api-access-wh29b\") pod \"auto-csr-approver-29536898-dk72j\" (UID: \"1b916670-1fc9-40b4-b106-99c7de6b151a\") " pod="openshift-infra/auto-csr-approver-29536898-dk72j" Feb 27 17:38:00 crc kubenswrapper[4708]: I0227 17:38:00.354259 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wh29b\" (UniqueName: \"kubernetes.io/projected/1b916670-1fc9-40b4-b106-99c7de6b151a-kube-api-access-wh29b\") pod \"auto-csr-approver-29536898-dk72j\" (UID: \"1b916670-1fc9-40b4-b106-99c7de6b151a\") " pod="openshift-infra/auto-csr-approver-29536898-dk72j" Feb 27 17:38:00 crc kubenswrapper[4708]: I0227 17:38:00.381936 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wh29b\" (UniqueName: \"kubernetes.io/projected/1b916670-1fc9-40b4-b106-99c7de6b151a-kube-api-access-wh29b\") pod \"auto-csr-approver-29536898-dk72j\" (UID: \"1b916670-1fc9-40b4-b106-99c7de6b151a\") " pod="openshift-infra/auto-csr-approver-29536898-dk72j" Feb 27 17:38:00 crc kubenswrapper[4708]: I0227 17:38:00.469394 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536898-dk72j" Feb 27 17:38:00 crc kubenswrapper[4708]: I0227 17:38:00.924398 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536898-dk72j"] Feb 27 17:38:01 crc kubenswrapper[4708]: I0227 17:38:01.792582 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536898-dk72j" event={"ID":"1b916670-1fc9-40b4-b106-99c7de6b151a","Type":"ContainerStarted","Data":"2cf63e360ad9f469d355bbeb24808db3e3ec1b7d291531c22c0cfc592c6abc6b"} Feb 27 17:38:02 crc kubenswrapper[4708]: E0227 17:38:02.067162 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:38:02 crc kubenswrapper[4708]: E0227 17:38:02.067586 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:38:02 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:38:02 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wh29b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536898-dk72j_openshift-infra(1b916670-1fc9-40b4-b106-99c7de6b151a): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:38:02 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:38:02 crc kubenswrapper[4708]: E0227 17:38:02.068809 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536898-dk72j" podUID="1b916670-1fc9-40b4-b106-99c7de6b151a" Feb 27 17:38:02 crc kubenswrapper[4708]: E0227 17:38:02.805245 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536898-dk72j" podUID="1b916670-1fc9-40b4-b106-99c7de6b151a" Feb 27 17:39:05 crc kubenswrapper[4708]: I0227 17:39:05.632139 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:39:05 crc kubenswrapper[4708]: I0227 17:39:05.633142 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:39:14 crc kubenswrapper[4708]: E0227 17:39:14.744877 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:39:14 crc kubenswrapper[4708]: E0227 17:39:14.745593 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:39:14 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:39:14 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wh29b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536898-dk72j_openshift-infra(1b916670-1fc9-40b4-b106-99c7de6b151a): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:39:14 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:39:14 crc kubenswrapper[4708]: E0227 17:39:14.746823 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536898-dk72j" podUID="1b916670-1fc9-40b4-b106-99c7de6b151a" Feb 27 17:39:25 crc 
Feb 27 17:39:25 crc kubenswrapper[4708]: E0227 17:39:25.231810 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-dk72j" podUID="1b916670-1fc9-40b4-b106-99c7de6b151a" Feb 27 17:39:35 crc kubenswrapper[4708]: I0227 17:39:35.631659 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:39:35 crc kubenswrapper[4708]: I0227 17:39:35.632272 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
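
This is the second liveness failure for machine-config-daemon in thirty seconds, each time connection refused on the daemon's health port. The kubelet's check is reproducible from the node itself; a sketch, assuming shell access on the crc host, with the endpoint taken from the probe output above:

    # The same HTTP GET the kubelet's liveness probe performs; 'connection
    # refused' here matches the failures in the log.
    curl -fsS http://127.0.0.1:8798/health || echo 'health endpoint unreachable'
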
Feb 27 17:39:37 crc kubenswrapper[4708]: I0227 17:39:37.933180 4708 generic.go:334] "Generic (PLEG): container finished" podID="991979f1-f211-41ce-b112-fa555006dfec" containerID="9a9c69666133508c44ec213329bbaddc07eb021fcb026a8168e6ee1362ea8dc2" exitCode=0 Feb 27 17:39:37 crc kubenswrapper[4708]: I0227 17:39:37.933320 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" event={"ID":"991979f1-f211-41ce-b112-fa555006dfec","Type":"ContainerDied","Data":"9a9c69666133508c44ec213329bbaddc07eb021fcb026a8168e6ee1362ea8dc2"} Feb 27 17:39:39 crc kubenswrapper[4708]: E0227 17:39:39.007375 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:39:39 crc kubenswrapper[4708]: E0227 17:39:39.007954 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:39:39 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:39:39 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-wh29b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536898-dk72j_openshift-infra(1b916670-1fc9-40b4-b106-99c7de6b151a): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:39:39 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:39:39 crc kubenswrapper[4708]: E0227 17:39:39.009300 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536898-dk72j" podUID="1b916670-1fc9-40b4-b106-99c7de6b151a" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.511324 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.606060 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-migration-ssh-key-0\") pod \"991979f1-f211-41ce-b112-fa555006dfec\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.606391 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-0\") pod \"991979f1-f211-41ce-b112-fa555006dfec\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.606490 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-3\") pod \"991979f1-f211-41ce-b112-fa555006dfec\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.606508 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-combined-ca-bundle\") pod \"991979f1-f211-41ce-b112-fa555006dfec\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.606527 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-ssh-key-openstack-edpm-ipam\") pod \"991979f1-f211-41ce-b112-fa555006dfec\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.606550 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgnf7\" (UniqueName: \"kubernetes.io/projected/991979f1-f211-41ce-b112-fa555006dfec-kube-api-access-kgnf7\") pod \"991979f1-f211-41ce-b112-fa555006dfec\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.606591 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-2\") pod 
\"991979f1-f211-41ce-b112-fa555006dfec\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.606638 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-1\") pod \"991979f1-f211-41ce-b112-fa555006dfec\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.606771 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/991979f1-f211-41ce-b112-fa555006dfec-nova-extra-config-0\") pod \"991979f1-f211-41ce-b112-fa555006dfec\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.606898 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-migration-ssh-key-1\") pod \"991979f1-f211-41ce-b112-fa555006dfec\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.606939 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-inventory\") pod \"991979f1-f211-41ce-b112-fa555006dfec\" (UID: \"991979f1-f211-41ce-b112-fa555006dfec\") " Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.612717 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/991979f1-f211-41ce-b112-fa555006dfec-kube-api-access-kgnf7" (OuterVolumeSpecName: "kube-api-access-kgnf7") pod "991979f1-f211-41ce-b112-fa555006dfec" (UID: "991979f1-f211-41ce-b112-fa555006dfec"). InnerVolumeSpecName "kube-api-access-kgnf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.612866 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "991979f1-f211-41ce-b112-fa555006dfec" (UID: "991979f1-f211-41ce-b112-fa555006dfec"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.639062 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-2" (OuterVolumeSpecName: "nova-cell1-compute-config-2") pod "991979f1-f211-41ce-b112-fa555006dfec" (UID: "991979f1-f211-41ce-b112-fa555006dfec"). InnerVolumeSpecName "nova-cell1-compute-config-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.648188 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-3" (OuterVolumeSpecName: "nova-cell1-compute-config-3") pod "991979f1-f211-41ce-b112-fa555006dfec" (UID: "991979f1-f211-41ce-b112-fa555006dfec"). InnerVolumeSpecName "nova-cell1-compute-config-3". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.651482 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "991979f1-f211-41ce-b112-fa555006dfec" (UID: "991979f1-f211-41ce-b112-fa555006dfec"). InnerVolumeSpecName "nova-cell1-compute-config-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.651605 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/991979f1-f211-41ce-b112-fa555006dfec-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "991979f1-f211-41ce-b112-fa555006dfec" (UID: "991979f1-f211-41ce-b112-fa555006dfec"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.652015 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-inventory" (OuterVolumeSpecName: "inventory") pod "991979f1-f211-41ce-b112-fa555006dfec" (UID: "991979f1-f211-41ce-b112-fa555006dfec"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.658273 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "991979f1-f211-41ce-b112-fa555006dfec" (UID: "991979f1-f211-41ce-b112-fa555006dfec"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.662483 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "991979f1-f211-41ce-b112-fa555006dfec" (UID: "991979f1-f211-41ce-b112-fa555006dfec"). InnerVolumeSpecName "nova-migration-ssh-key-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.664360 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "991979f1-f211-41ce-b112-fa555006dfec" (UID: "991979f1-f211-41ce-b112-fa555006dfec"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.672611 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "991979f1-f211-41ce-b112-fa555006dfec" (UID: "991979f1-f211-41ce-b112-fa555006dfec"). InnerVolumeSpecName "nova-migration-ssh-key-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.709313 4708 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/991979f1-f211-41ce-b112-fa555006dfec-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.709507 4708 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.709594 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.709652 4708 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.709717 4708 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.709767 4708 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-3\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-3\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.709818 4708 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.709883 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.709936 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgnf7\" (UniqueName: \"kubernetes.io/projected/991979f1-f211-41ce-b112-fa555006dfec-kube-api-access-kgnf7\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.709995 4708 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-2\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-2\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.710048 4708 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/991979f1-f211-41ce-b112-fa555006dfec-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.957105 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" event={"ID":"991979f1-f211-41ce-b112-fa555006dfec","Type":"ContainerDied","Data":"04755cddb9e7a51a1871865b44b50872be5f311421c0512964075d2403602aa3"} Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 
17:39:39.957152 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-edpm-deployment-openstack-edpm-ipam-w575s" Feb 27 17:39:39 crc kubenswrapper[4708]: I0227 17:39:39.957190 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04755cddb9e7a51a1871865b44b50872be5f311421c0512964075d2403602aa3" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.069575 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v"] Feb 27 17:39:40 crc kubenswrapper[4708]: E0227 17:39:40.070073 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="991979f1-f211-41ce-b112-fa555006dfec" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.070092 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="991979f1-f211-41ce-b112-fa555006dfec" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.070352 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="991979f1-f211-41ce-b112-fa555006dfec" containerName="nova-edpm-deployment-openstack-edpm-ipam" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.071277 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.073925 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.074080 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-compute-config-data" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.074152 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.074504 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.078995 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-frq2q" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.129372 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v"] Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.242342 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.242418 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 
17:39:40.242471 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.242590 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.242643 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.242822 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.242983 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j2bm\" (UniqueName: \"kubernetes.io/projected/f46db3bc-f11b-4634-9916-10c0094d3d5f-kube-api-access-9j2bm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.356226 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-2\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.356571 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.356694 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: 
\"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.356900 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.357837 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.358337 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.358684 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9j2bm\" (UniqueName: \"kubernetes.io/projected/f46db3bc-f11b-4634-9916-10c0094d3d5f-kube-api-access-9j2bm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.360826 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-0\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.361207 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-telemetry-combined-ca-bundle\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.361894 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-inventory\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.362692 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-2\") 
pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.362799 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-1\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.372147 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ssh-key-openstack-edpm-ipam\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.376751 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9j2bm\" (UniqueName: \"kubernetes.io/projected/f46db3bc-f11b-4634-9916-10c0094d3d5f-kube-api-access-9j2bm\") pod \"telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:40 crc kubenswrapper[4708]: I0227 17:39:40.555894 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:39:41 crc kubenswrapper[4708]: I0227 17:39:41.121625 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v"] Feb 27 17:39:41 crc kubenswrapper[4708]: W0227 17:39:41.131041 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf46db3bc_f11b_4634_9916_10c0094d3d5f.slice/crio-8e0b4c5eeb3dc7e23e34eb42bc8caef60303cb5da37948558baf57a3e05723b1 WatchSource:0}: Error finding container 8e0b4c5eeb3dc7e23e34eb42bc8caef60303cb5da37948558baf57a3e05723b1: Status 404 returned error can't find the container with id 8e0b4c5eeb3dc7e23e34eb42bc8caef60303cb5da37948558baf57a3e05723b1 Feb 27 17:39:41 crc kubenswrapper[4708]: I0227 17:39:41.981048 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" event={"ID":"f46db3bc-f11b-4634-9916-10c0094d3d5f","Type":"ContainerStarted","Data":"8e0b4c5eeb3dc7e23e34eb42bc8caef60303cb5da37948558baf57a3e05723b1"} Feb 27 17:39:42 crc kubenswrapper[4708]: I0227 17:39:42.990606 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" event={"ID":"f46db3bc-f11b-4634-9916-10c0094d3d5f","Type":"ContainerStarted","Data":"610015059feb3f7160301b08500ea248063838e89edf897d679d7dee4475aa1c"} Feb 27 17:39:43 crc kubenswrapper[4708]: I0227 17:39:43.028351 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" podStartSLOduration=2.35664657 podStartE2EDuration="3.028325614s" podCreationTimestamp="2026-02-27 17:39:40 +0000 UTC" firstStartedPulling="2026-02-27 17:39:41.132818812 +0000 UTC m=+2779.648616409" 
lastFinishedPulling="2026-02-27 17:39:41.804497866 +0000 UTC m=+2780.320295453" observedRunningTime="2026-02-27 17:39:43.026330058 +0000 UTC m=+2781.542127645" watchObservedRunningTime="2026-02-27 17:39:43.028325614 +0000 UTC m=+2781.544123231" Feb 27 17:39:52 crc kubenswrapper[4708]: E0227 17:39:52.246789 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-dk72j" podUID="1b916670-1fc9-40b4-b106-99c7de6b151a" Feb 27 17:40:00 crc kubenswrapper[4708]: I0227 17:40:00.169656 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536900-ctdk8"] Feb 27 17:40:00 crc kubenswrapper[4708]: I0227 17:40:00.172476 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536900-ctdk8" Feb 27 17:40:00 crc kubenswrapper[4708]: I0227 17:40:00.195136 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536900-ctdk8"] Feb 27 17:40:00 crc kubenswrapper[4708]: I0227 17:40:00.308292 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpq8b\" (UniqueName: \"kubernetes.io/projected/c8a67245-e6f1-4c91-bf78-af3b7d4d77c0-kube-api-access-rpq8b\") pod \"auto-csr-approver-29536900-ctdk8\" (UID: \"c8a67245-e6f1-4c91-bf78-af3b7d4d77c0\") " pod="openshift-infra/auto-csr-approver-29536900-ctdk8" Feb 27 17:40:00 crc kubenswrapper[4708]: I0227 17:40:00.411254 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpq8b\" (UniqueName: \"kubernetes.io/projected/c8a67245-e6f1-4c91-bf78-af3b7d4d77c0-kube-api-access-rpq8b\") pod \"auto-csr-approver-29536900-ctdk8\" (UID: \"c8a67245-e6f1-4c91-bf78-af3b7d4d77c0\") " pod="openshift-infra/auto-csr-approver-29536900-ctdk8" Feb 27 17:40:00 crc kubenswrapper[4708]: I0227 17:40:00.438730 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpq8b\" (UniqueName: \"kubernetes.io/projected/c8a67245-e6f1-4c91-bf78-af3b7d4d77c0-kube-api-access-rpq8b\") pod \"auto-csr-approver-29536900-ctdk8\" (UID: \"c8a67245-e6f1-4c91-bf78-af3b7d4d77c0\") " pod="openshift-infra/auto-csr-approver-29536900-ctdk8" Feb 27 17:40:00 crc kubenswrapper[4708]: I0227 17:40:00.504002 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536900-ctdk8" Feb 27 17:40:01 crc kubenswrapper[4708]: I0227 17:40:01.084449 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536900-ctdk8"] Feb 27 17:40:01 crc kubenswrapper[4708]: I0227 17:40:01.094363 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:40:01 crc kubenswrapper[4708]: I0227 17:40:01.243472 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536900-ctdk8" event={"ID":"c8a67245-e6f1-4c91-bf78-af3b7d4d77c0","Type":"ContainerStarted","Data":"d6ae06b4b6dbc8a755cfb3834b2448d6bbc278470112a07bb3db6d26009462ed"} Feb 27 17:40:02 crc kubenswrapper[4708]: E0227 17:40:02.085375 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:40:02 crc kubenswrapper[4708]: E0227 17:40:02.085818 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:40:02 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:40:02 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rpq8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536900-ctdk8_openshift-infra(c8a67245-e6f1-4c91-bf78-af3b7d4d77c0): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:40:02 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:40:02 crc kubenswrapper[4708]: E0227 17:40:02.087254 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536900-ctdk8" podUID="c8a67245-e6f1-4c91-bf78-af3b7d4d77c0" Feb 27 17:40:02 crc kubenswrapper[4708]: E0227 17:40:02.258668 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536900-ctdk8" podUID="c8a67245-e6f1-4c91-bf78-af3b7d4d77c0" Feb 27 17:40:03 crc kubenswrapper[4708]: E0227 17:40:03.231319 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-dk72j" podUID="1b916670-1fc9-40b4-b106-99c7de6b151a" Feb 27 17:40:05 crc kubenswrapper[4708]: I0227 17:40:05.631828 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:40:05 crc kubenswrapper[4708]: I0227 17:40:05.632410 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:40:05 crc kubenswrapper[4708]: I0227 17:40:05.632486 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 17:40:05 crc kubenswrapper[4708]: I0227 17:40:05.634257 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"439742b868d272d84c1db3500bcc1b293c0585be3463e98cfb78d97fcb4e3465"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:40:05 crc kubenswrapper[4708]: I0227 17:40:05.634332 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://439742b868d272d84c1db3500bcc1b293c0585be3463e98cfb78d97fcb4e3465" gracePeriod=600 Feb 27 17:40:06 crc kubenswrapper[4708]: I0227 17:40:06.297725 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="439742b868d272d84c1db3500bcc1b293c0585be3463e98cfb78d97fcb4e3465" exitCode=0 Feb 27 17:40:06 crc kubenswrapper[4708]: I0227 17:40:06.297897 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"439742b868d272d84c1db3500bcc1b293c0585be3463e98cfb78d97fcb4e3465"} Feb 27 17:40:06 crc kubenswrapper[4708]: I0227 17:40:06.298433 4708 scope.go:117] "RemoveContainer" containerID="9361ba369b0377860857994dd8d8793f31943407bd763d2f6956612400bb0879" Feb 27 17:40:07 crc kubenswrapper[4708]: I0227 17:40:07.311881 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722"} Feb 27 17:40:14 crc kubenswrapper[4708]: E0227 
17:40:14.229893 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536898-dk72j" podUID="1b916670-1fc9-40b4-b106-99c7de6b151a" Feb 27 17:40:18 crc kubenswrapper[4708]: E0227 17:40:18.690959 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:40:18 crc kubenswrapper[4708]: E0227 17:40:18.691533 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:40:18 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:40:18 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rpq8b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536900-ctdk8_openshift-infra(c8a67245-e6f1-4c91-bf78-af3b7d4d77c0): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:40:18 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:40:18 crc kubenswrapper[4708]: E0227 17:40:18.692727 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536900-ctdk8" podUID="c8a67245-e6f1-4c91-bf78-af3b7d4d77c0" Feb 27 17:40:20 crc kubenswrapper[4708]: I0227 17:40:20.403124 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cmjkm"] Feb 27 17:40:20 crc kubenswrapper[4708]: I0227 17:40:20.406379 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:40:20 crc kubenswrapper[4708]: I0227 17:40:20.425909 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmjkm"] Feb 27 17:40:20 crc kubenswrapper[4708]: I0227 17:40:20.601539 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3aa83a-b917-43c1-9017-aa9db83770fe-catalog-content\") pod \"redhat-operators-cmjkm\" (UID: \"1a3aa83a-b917-43c1-9017-aa9db83770fe\") " pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:40:20 crc kubenswrapper[4708]: I0227 17:40:20.601642 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rhfc\" (UniqueName: \"kubernetes.io/projected/1a3aa83a-b917-43c1-9017-aa9db83770fe-kube-api-access-2rhfc\") pod \"redhat-operators-cmjkm\" (UID: \"1a3aa83a-b917-43c1-9017-aa9db83770fe\") " pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:40:20 crc kubenswrapper[4708]: I0227 17:40:20.601712 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3aa83a-b917-43c1-9017-aa9db83770fe-utilities\") pod \"redhat-operators-cmjkm\" (UID: \"1a3aa83a-b917-43c1-9017-aa9db83770fe\") " pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:40:20 crc kubenswrapper[4708]: I0227 17:40:20.703789 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3aa83a-b917-43c1-9017-aa9db83770fe-catalog-content\") pod \"redhat-operators-cmjkm\" (UID: \"1a3aa83a-b917-43c1-9017-aa9db83770fe\") " pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:40:20 crc kubenswrapper[4708]: I0227 17:40:20.704137 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rhfc\" (UniqueName: \"kubernetes.io/projected/1a3aa83a-b917-43c1-9017-aa9db83770fe-kube-api-access-2rhfc\") pod \"redhat-operators-cmjkm\" (UID: \"1a3aa83a-b917-43c1-9017-aa9db83770fe\") " pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:40:20 crc kubenswrapper[4708]: I0227 17:40:20.704251 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3aa83a-b917-43c1-9017-aa9db83770fe-catalog-content\") pod \"redhat-operators-cmjkm\" (UID: \"1a3aa83a-b917-43c1-9017-aa9db83770fe\") " pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:40:20 crc kubenswrapper[4708]: I0227 17:40:20.704264 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3aa83a-b917-43c1-9017-aa9db83770fe-utilities\") pod \"redhat-operators-cmjkm\" (UID: \"1a3aa83a-b917-43c1-9017-aa9db83770fe\") " pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:40:20 crc kubenswrapper[4708]: I0227 17:40:20.704736 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3aa83a-b917-43c1-9017-aa9db83770fe-utilities\") pod \"redhat-operators-cmjkm\" (UID: \"1a3aa83a-b917-43c1-9017-aa9db83770fe\") " pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:40:20 crc kubenswrapper[4708]: I0227 17:40:20.733838 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-2rhfc\" (UniqueName: \"kubernetes.io/projected/1a3aa83a-b917-43c1-9017-aa9db83770fe-kube-api-access-2rhfc\") pod \"redhat-operators-cmjkm\" (UID: \"1a3aa83a-b917-43c1-9017-aa9db83770fe\") " pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:40:20 crc kubenswrapper[4708]: I0227 17:40:20.749761 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:40:21 crc kubenswrapper[4708]: I0227 17:40:21.207388 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cmjkm"] Feb 27 17:40:21 crc kubenswrapper[4708]: I0227 17:40:21.463548 4708 generic.go:334] "Generic (PLEG): container finished" podID="1a3aa83a-b917-43c1-9017-aa9db83770fe" containerID="dc7a289d1914fcbc11007e072d26abbb1192b514f40b994f8f132f8424d5f964" exitCode=0 Feb 27 17:40:21 crc kubenswrapper[4708]: I0227 17:40:21.463638 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmjkm" event={"ID":"1a3aa83a-b917-43c1-9017-aa9db83770fe","Type":"ContainerDied","Data":"dc7a289d1914fcbc11007e072d26abbb1192b514f40b994f8f132f8424d5f964"} Feb 27 17:40:21 crc kubenswrapper[4708]: I0227 17:40:21.463911 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmjkm" event={"ID":"1a3aa83a-b917-43c1-9017-aa9db83770fe","Type":"ContainerStarted","Data":"4aa9f6cb8ff764b3003c59bf4947abce8f0c600f23fd3a4563b82ca441fab9d4"} Feb 27 17:40:22 crc kubenswrapper[4708]: E0227 17:40:22.204998 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 17:40:22 crc kubenswrapper[4708]: E0227 17:40:22.205120 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rhfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cmjkm_openshift-marketplace(1a3aa83a-b917-43c1-9017-aa9db83770fe): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:40:22 crc kubenswrapper[4708]: E0227 17:40:22.206286 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:40:22 crc kubenswrapper[4708]: E0227 17:40:22.494281 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:40:28 crc kubenswrapper[4708]: I0227 17:40:28.557277 4708 generic.go:334] "Generic (PLEG): container finished" podID="1b916670-1fc9-40b4-b106-99c7de6b151a" containerID="2aefdfb9b9b29023d2382c8a46d4050e7b4cffe7a035ff170053619fe05c5487" exitCode=0 Feb 27 17:40:28 crc kubenswrapper[4708]: I0227 17:40:28.557650 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536898-dk72j" event={"ID":"1b916670-1fc9-40b4-b106-99c7de6b151a","Type":"ContainerDied","Data":"2aefdfb9b9b29023d2382c8a46d4050e7b4cffe7a035ff170053619fe05c5487"} Feb 27 17:40:30 crc kubenswrapper[4708]: I0227 17:40:30.039508 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536898-dk72j" Feb 27 17:40:30 crc kubenswrapper[4708]: I0227 17:40:30.157909 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wh29b\" (UniqueName: \"kubernetes.io/projected/1b916670-1fc9-40b4-b106-99c7de6b151a-kube-api-access-wh29b\") pod \"1b916670-1fc9-40b4-b106-99c7de6b151a\" (UID: \"1b916670-1fc9-40b4-b106-99c7de6b151a\") " Feb 27 17:40:30 crc kubenswrapper[4708]: I0227 17:40:30.166704 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1b916670-1fc9-40b4-b106-99c7de6b151a-kube-api-access-wh29b" (OuterVolumeSpecName: "kube-api-access-wh29b") pod "1b916670-1fc9-40b4-b106-99c7de6b151a" (UID: "1b916670-1fc9-40b4-b106-99c7de6b151a"). InnerVolumeSpecName "kube-api-access-wh29b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:40:30 crc kubenswrapper[4708]: I0227 17:40:30.260929 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wh29b\" (UniqueName: \"kubernetes.io/projected/1b916670-1fc9-40b4-b106-99c7de6b151a-kube-api-access-wh29b\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:30 crc kubenswrapper[4708]: I0227 17:40:30.586731 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536898-dk72j" event={"ID":"1b916670-1fc9-40b4-b106-99c7de6b151a","Type":"ContainerDied","Data":"2cf63e360ad9f469d355bbeb24808db3e3ec1b7d291531c22c0cfc592c6abc6b"} Feb 27 17:40:30 crc kubenswrapper[4708]: I0227 17:40:30.586773 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cf63e360ad9f469d355bbeb24808db3e3ec1b7d291531c22c0cfc592c6abc6b" Feb 27 17:40:30 crc kubenswrapper[4708]: I0227 17:40:30.586823 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536898-dk72j" Feb 27 17:40:31 crc kubenswrapper[4708]: I0227 17:40:31.153026 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536892-xgtq2"] Feb 27 17:40:31 crc kubenswrapper[4708]: I0227 17:40:31.167926 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536892-xgtq2"] Feb 27 17:40:32 crc kubenswrapper[4708]: E0227 17:40:32.244140 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536900-ctdk8" podUID="c8a67245-e6f1-4c91-bf78-af3b7d4d77c0" Feb 27 17:40:32 crc kubenswrapper[4708]: I0227 17:40:32.246350 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59b35ef6-e427-4dda-9aae-fc748d00cc1f" path="/var/lib/kubelet/pods/59b35ef6-e427-4dda-9aae-fc748d00cc1f/volumes" Feb 27 17:40:35 crc kubenswrapper[4708]: E0227 17:40:35.914197 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 17:40:35 crc kubenswrapper[4708]: E0227 17:40:35.914982 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rhfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cmjkm_openshift-marketplace(1a3aa83a-b917-43c1-9017-aa9db83770fe): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:40:35 crc kubenswrapper[4708]: E0227 17:40:35.916340 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:40:48 crc kubenswrapper[4708]: E0227 17:40:48.230903 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:40:48 crc kubenswrapper[4708]: I0227 17:40:48.803244 4708 generic.go:334] "Generic (PLEG): container finished" podID="c8a67245-e6f1-4c91-bf78-af3b7d4d77c0" containerID="b2ae793589e60d534400897341bc947b9a4e6b2c02b79e64f91bcb58b1ea7f9d" exitCode=0 Feb 27 17:40:48 crc kubenswrapper[4708]: I0227 17:40:48.803351 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536900-ctdk8" event={"ID":"c8a67245-e6f1-4c91-bf78-af3b7d4d77c0","Type":"ContainerDied","Data":"b2ae793589e60d534400897341bc947b9a4e6b2c02b79e64f91bcb58b1ea7f9d"} Feb 27 17:40:50 crc kubenswrapper[4708]: I0227 17:40:50.431814 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536900-ctdk8" Feb 27 17:40:50 crc kubenswrapper[4708]: I0227 17:40:50.544952 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpq8b\" (UniqueName: \"kubernetes.io/projected/c8a67245-e6f1-4c91-bf78-af3b7d4d77c0-kube-api-access-rpq8b\") pod \"c8a67245-e6f1-4c91-bf78-af3b7d4d77c0\" (UID: \"c8a67245-e6f1-4c91-bf78-af3b7d4d77c0\") " Feb 27 17:40:50 crc kubenswrapper[4708]: I0227 17:40:50.551704 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8a67245-e6f1-4c91-bf78-af3b7d4d77c0-kube-api-access-rpq8b" (OuterVolumeSpecName: "kube-api-access-rpq8b") pod "c8a67245-e6f1-4c91-bf78-af3b7d4d77c0" (UID: "c8a67245-e6f1-4c91-bf78-af3b7d4d77c0"). InnerVolumeSpecName "kube-api-access-rpq8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:40:50 crc kubenswrapper[4708]: I0227 17:40:50.648348 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpq8b\" (UniqueName: \"kubernetes.io/projected/c8a67245-e6f1-4c91-bf78-af3b7d4d77c0-kube-api-access-rpq8b\") on node \"crc\" DevicePath \"\"" Feb 27 17:40:50 crc kubenswrapper[4708]: I0227 17:40:50.834512 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536900-ctdk8" event={"ID":"c8a67245-e6f1-4c91-bf78-af3b7d4d77c0","Type":"ContainerDied","Data":"d6ae06b4b6dbc8a755cfb3834b2448d6bbc278470112a07bb3db6d26009462ed"} Feb 27 17:40:50 crc kubenswrapper[4708]: I0227 17:40:50.834583 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6ae06b4b6dbc8a755cfb3834b2448d6bbc278470112a07bb3db6d26009462ed" Feb 27 17:40:50 crc kubenswrapper[4708]: I0227 17:40:50.834609 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536900-ctdk8" Feb 27 17:40:51 crc kubenswrapper[4708]: I0227 17:40:51.533149 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536894-ccrlc"] Feb 27 17:40:51 crc kubenswrapper[4708]: I0227 17:40:51.545997 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536894-ccrlc"] Feb 27 17:40:52 crc kubenswrapper[4708]: I0227 17:40:52.249112 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9108e6ed-454b-444e-977a-e710b2da2e6c" path="/var/lib/kubelet/pods/9108e6ed-454b-444e-977a-e710b2da2e6c/volumes" Feb 27 17:41:01 crc kubenswrapper[4708]: I0227 17:41:01.900081 4708 scope.go:117] "RemoveContainer" containerID="1a6d74b88bee2ada5cb9b65b9b16772cbace894c0f76d87ac34a78a94e3219e0" Feb 27 17:41:01 crc kubenswrapper[4708]: I0227 17:41:01.952339 4708 scope.go:117] "RemoveContainer" containerID="399e1c6f8919ffc12de65889c17c93e30a6fc9ac180e7fba7eac9215c3c53834" Feb 27 17:41:05 crc kubenswrapper[4708]: E0227 17:41:05.167235 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 17:41:05 crc kubenswrapper[4708]: E0227 17:41:05.167826 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rhfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cmjkm_openshift-marketplace(1a3aa83a-b917-43c1-9017-aa9db83770fe): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:41:05 crc kubenswrapper[4708]: E0227 17:41:05.169067 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:41:19 crc kubenswrapper[4708]: E0227 17:41:19.232333 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:41:30 crc kubenswrapper[4708]: E0227 17:41:30.231012 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:41:44 crc kubenswrapper[4708]: E0227 17:41:44.234817 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:41:58 crc kubenswrapper[4708]: E0227 17:41:58.135158 4708 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 17:41:58 crc kubenswrapper[4708]: E0227 17:41:58.135860 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2rhfc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-cmjkm_openshift-marketplace(1a3aa83a-b917-43c1-9017-aa9db83770fe): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:41:58 crc kubenswrapper[4708]: E0227 17:41:58.137073 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:42:00 crc kubenswrapper[4708]: I0227 17:42:00.152678 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536902-mlgds"] Feb 27 17:42:00 crc kubenswrapper[4708]: E0227 17:42:00.153312 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1b916670-1fc9-40b4-b106-99c7de6b151a" containerName="oc" Feb 27 17:42:00 crc kubenswrapper[4708]: I0227 17:42:00.153324 4708 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="1b916670-1fc9-40b4-b106-99c7de6b151a" containerName="oc" Feb 27 17:42:00 crc kubenswrapper[4708]: E0227 17:42:00.153350 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8a67245-e6f1-4c91-bf78-af3b7d4d77c0" containerName="oc" Feb 27 17:42:00 crc kubenswrapper[4708]: I0227 17:42:00.153357 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8a67245-e6f1-4c91-bf78-af3b7d4d77c0" containerName="oc" Feb 27 17:42:00 crc kubenswrapper[4708]: I0227 17:42:00.153533 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="1b916670-1fc9-40b4-b106-99c7de6b151a" containerName="oc" Feb 27 17:42:00 crc kubenswrapper[4708]: I0227 17:42:00.153545 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8a67245-e6f1-4c91-bf78-af3b7d4d77c0" containerName="oc" Feb 27 17:42:00 crc kubenswrapper[4708]: I0227 17:42:00.154282 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536902-mlgds" Feb 27 17:42:00 crc kubenswrapper[4708]: I0227 17:42:00.156496 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:42:00 crc kubenswrapper[4708]: I0227 17:42:00.156644 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:42:00 crc kubenswrapper[4708]: I0227 17:42:00.160676 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:42:00 crc kubenswrapper[4708]: I0227 17:42:00.166912 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536902-mlgds"] Feb 27 17:42:00 crc kubenswrapper[4708]: I0227 17:42:00.268186 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67z66\" (UniqueName: \"kubernetes.io/projected/d6db7652-4a2c-4ae5-9431-ffde3373ae3f-kube-api-access-67z66\") pod \"auto-csr-approver-29536902-mlgds\" (UID: \"d6db7652-4a2c-4ae5-9431-ffde3373ae3f\") " pod="openshift-infra/auto-csr-approver-29536902-mlgds" Feb 27 17:42:00 crc kubenswrapper[4708]: I0227 17:42:00.370069 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67z66\" (UniqueName: \"kubernetes.io/projected/d6db7652-4a2c-4ae5-9431-ffde3373ae3f-kube-api-access-67z66\") pod \"auto-csr-approver-29536902-mlgds\" (UID: \"d6db7652-4a2c-4ae5-9431-ffde3373ae3f\") " pod="openshift-infra/auto-csr-approver-29536902-mlgds" Feb 27 17:42:00 crc kubenswrapper[4708]: I0227 17:42:00.392736 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67z66\" (UniqueName: \"kubernetes.io/projected/d6db7652-4a2c-4ae5-9431-ffde3373ae3f-kube-api-access-67z66\") pod \"auto-csr-approver-29536902-mlgds\" (UID: \"d6db7652-4a2c-4ae5-9431-ffde3373ae3f\") " pod="openshift-infra/auto-csr-approver-29536902-mlgds" Feb 27 17:42:00 crc kubenswrapper[4708]: I0227 17:42:00.512069 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536902-mlgds" Feb 27 17:42:01 crc kubenswrapper[4708]: I0227 17:42:01.008908 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536902-mlgds"] Feb 27 17:42:01 crc kubenswrapper[4708]: I0227 17:42:01.615009 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536902-mlgds" event={"ID":"d6db7652-4a2c-4ae5-9431-ffde3373ae3f","Type":"ContainerStarted","Data":"797b3f1e660956577d9cf19d62d74e3a03b3bffefe09a9df9d8d6849637f40a3"} Feb 27 17:42:01 crc kubenswrapper[4708]: E0227 17:42:01.931836 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:42:01 crc kubenswrapper[4708]: E0227 17:42:01.931992 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:42:01 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:42:01 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67z66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536902-mlgds_openshift-infra(d6db7652-4a2c-4ae5-9431-ffde3373ae3f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:42:01 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:42:01 crc kubenswrapper[4708]: E0227 17:42:01.933252 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536902-mlgds" podUID="d6db7652-4a2c-4ae5-9431-ffde3373ae3f" Feb 27 17:42:02 crc kubenswrapper[4708]: E0227 17:42:02.651266 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536902-mlgds" podUID="d6db7652-4a2c-4ae5-9431-ffde3373ae3f" Feb 27 17:42:13 crc kubenswrapper[4708]: E0227 17:42:13.232019 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:42:15 crc kubenswrapper[4708]: I0227 17:42:15.795232 4708 generic.go:334] "Generic (PLEG): container finished" podID="f46db3bc-f11b-4634-9916-10c0094d3d5f" containerID="610015059feb3f7160301b08500ea248063838e89edf897d679d7dee4475aa1c" exitCode=0 Feb 27 17:42:15 crc kubenswrapper[4708]: I0227 17:42:15.795338 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" event={"ID":"f46db3bc-f11b-4634-9916-10c0094d3d5f","Type":"ContainerDied","Data":"610015059feb3f7160301b08500ea248063838e89edf897d679d7dee4475aa1c"} Feb 27 17:42:16 crc kubenswrapper[4708]: E0227 17:42:16.301409 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:42:16 crc kubenswrapper[4708]: E0227 17:42:16.301806 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:42:16 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:42:16 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-67z66,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536902-mlgds_openshift-infra(d6db7652-4a2c-4ae5-9431-ffde3373ae3f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:42:16 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:42:16 crc kubenswrapper[4708]: E0227 17:42:16.303073 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536902-mlgds" podUID="d6db7652-4a2c-4ae5-9431-ffde3373ae3f" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.447047 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.563654 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-inventory\") pod \"f46db3bc-f11b-4634-9916-10c0094d3d5f\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.563787 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9j2bm\" (UniqueName: \"kubernetes.io/projected/f46db3bc-f11b-4634-9916-10c0094d3d5f-kube-api-access-9j2bm\") pod \"f46db3bc-f11b-4634-9916-10c0094d3d5f\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.563896 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-1\") pod \"f46db3bc-f11b-4634-9916-10c0094d3d5f\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.563935 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-telemetry-combined-ca-bundle\") pod \"f46db3bc-f11b-4634-9916-10c0094d3d5f\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.564026 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ssh-key-openstack-edpm-ipam\") pod \"f46db3bc-f11b-4634-9916-10c0094d3d5f\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.564088 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-0\") pod \"f46db3bc-f11b-4634-9916-10c0094d3d5f\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.564117 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-2\") pod \"f46db3bc-f11b-4634-9916-10c0094d3d5f\" (UID: \"f46db3bc-f11b-4634-9916-10c0094d3d5f\") " Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.570971 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f46db3bc-f11b-4634-9916-10c0094d3d5f-kube-api-access-9j2bm" (OuterVolumeSpecName: "kube-api-access-9j2bm") pod "f46db3bc-f11b-4634-9916-10c0094d3d5f" (UID: "f46db3bc-f11b-4634-9916-10c0094d3d5f"). InnerVolumeSpecName "kube-api-access-9j2bm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.584551 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-telemetry-combined-ca-bundle" (OuterVolumeSpecName: "telemetry-combined-ca-bundle") pod "f46db3bc-f11b-4634-9916-10c0094d3d5f" (UID: "f46db3bc-f11b-4634-9916-10c0094d3d5f"). InnerVolumeSpecName "telemetry-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.599664 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-inventory" (OuterVolumeSpecName: "inventory") pod "f46db3bc-f11b-4634-9916-10c0094d3d5f" (UID: "f46db3bc-f11b-4634-9916-10c0094d3d5f"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.604593 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-1" (OuterVolumeSpecName: "ceilometer-compute-config-data-1") pod "f46db3bc-f11b-4634-9916-10c0094d3d5f" (UID: "f46db3bc-f11b-4634-9916-10c0094d3d5f"). InnerVolumeSpecName "ceilometer-compute-config-data-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.611972 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-2" (OuterVolumeSpecName: "ceilometer-compute-config-data-2") pod "f46db3bc-f11b-4634-9916-10c0094d3d5f" (UID: "f46db3bc-f11b-4634-9916-10c0094d3d5f"). InnerVolumeSpecName "ceilometer-compute-config-data-2". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.612104 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-0" (OuterVolumeSpecName: "ceilometer-compute-config-data-0") pod "f46db3bc-f11b-4634-9916-10c0094d3d5f" (UID: "f46db3bc-f11b-4634-9916-10c0094d3d5f"). InnerVolumeSpecName "ceilometer-compute-config-data-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.614868 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "f46db3bc-f11b-4634-9916-10c0094d3d5f" (UID: "f46db3bc-f11b-4634-9916-10c0094d3d5f"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.667030 4708 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-inventory\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.667068 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9j2bm\" (UniqueName: \"kubernetes.io/projected/f46db3bc-f11b-4634-9916-10c0094d3d5f-kube-api-access-9j2bm\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.667085 4708 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-1\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-1\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.667099 4708 reconciler_common.go:293] "Volume detached for volume \"telemetry-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-telemetry-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.667112 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.667124 4708 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-0\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-0\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.667137 4708 reconciler_common.go:293] "Volume detached for volume \"ceilometer-compute-config-data-2\" (UniqueName: \"kubernetes.io/secret/f46db3bc-f11b-4634-9916-10c0094d3d5f-ceilometer-compute-config-data-2\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.850842 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" event={"ID":"f46db3bc-f11b-4634-9916-10c0094d3d5f","Type":"ContainerDied","Data":"8e0b4c5eeb3dc7e23e34eb42bc8caef60303cb5da37948558baf57a3e05723b1"} Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.851241 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e0b4c5eeb3dc7e23e34eb42bc8caef60303cb5da37948558baf57a3e05723b1" Feb 27 17:42:17 crc kubenswrapper[4708]: I0227 17:42:17.851028 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v" Feb 27 17:42:24 crc kubenswrapper[4708]: E0227 17:42:24.233246 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:42:27 crc kubenswrapper[4708]: E0227 17:42:27.230754 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536902-mlgds" podUID="d6db7652-4a2c-4ae5-9431-ffde3373ae3f" Feb 27 17:42:35 crc kubenswrapper[4708]: E0227 17:42:35.231135 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:42:35 crc kubenswrapper[4708]: I0227 17:42:35.631311 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:42:35 crc kubenswrapper[4708]: I0227 17:42:35.631664 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:42:44 crc kubenswrapper[4708]: I0227 17:42:44.172576 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536902-mlgds" event={"ID":"d6db7652-4a2c-4ae5-9431-ffde3373ae3f","Type":"ContainerStarted","Data":"36261dd96463185d9e900b72935ab212f2c797ee175368fe4afc0d52f009b1e3"} Feb 27 17:42:44 crc kubenswrapper[4708]: I0227 17:42:44.199212 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536902-mlgds" podStartSLOduration=1.448629478 podStartE2EDuration="44.199194792s" podCreationTimestamp="2026-02-27 17:42:00 +0000 UTC" firstStartedPulling="2026-02-27 17:42:01.010875841 +0000 UTC m=+2919.526673428" lastFinishedPulling="2026-02-27 17:42:43.761441165 +0000 UTC m=+2962.277238742" observedRunningTime="2026-02-27 17:42:44.191145806 +0000 UTC m=+2962.706943393" watchObservedRunningTime="2026-02-27 17:42:44.199194792 +0000 UTC m=+2962.714992369" Feb 27 17:42:45 crc kubenswrapper[4708]: I0227 17:42:45.184503 4708 generic.go:334] "Generic (PLEG): container finished" podID="d6db7652-4a2c-4ae5-9431-ffde3373ae3f" containerID="36261dd96463185d9e900b72935ab212f2c797ee175368fe4afc0d52f009b1e3" exitCode=0 Feb 27 17:42:45 crc kubenswrapper[4708]: I0227 17:42:45.184552 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536902-mlgds" event={"ID":"d6db7652-4a2c-4ae5-9431-ffde3373ae3f","Type":"ContainerDied","Data":"36261dd96463185d9e900b72935ab212f2c797ee175368fe4afc0d52f009b1e3"} Feb 27 17:42:46 crc 
kubenswrapper[4708]: I0227 17:42:46.647809 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536902-mlgds" Feb 27 17:42:46 crc kubenswrapper[4708]: I0227 17:42:46.753260 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67z66\" (UniqueName: \"kubernetes.io/projected/d6db7652-4a2c-4ae5-9431-ffde3373ae3f-kube-api-access-67z66\") pod \"d6db7652-4a2c-4ae5-9431-ffde3373ae3f\" (UID: \"d6db7652-4a2c-4ae5-9431-ffde3373ae3f\") " Feb 27 17:42:46 crc kubenswrapper[4708]: I0227 17:42:46.771748 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6db7652-4a2c-4ae5-9431-ffde3373ae3f-kube-api-access-67z66" (OuterVolumeSpecName: "kube-api-access-67z66") pod "d6db7652-4a2c-4ae5-9431-ffde3373ae3f" (UID: "d6db7652-4a2c-4ae5-9431-ffde3373ae3f"). InnerVolumeSpecName "kube-api-access-67z66". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:42:46 crc kubenswrapper[4708]: I0227 17:42:46.856440 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67z66\" (UniqueName: \"kubernetes.io/projected/d6db7652-4a2c-4ae5-9431-ffde3373ae3f-kube-api-access-67z66\") on node \"crc\" DevicePath \"\"" Feb 27 17:42:47 crc kubenswrapper[4708]: I0227 17:42:47.220830 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536902-mlgds" event={"ID":"d6db7652-4a2c-4ae5-9431-ffde3373ae3f","Type":"ContainerDied","Data":"797b3f1e660956577d9cf19d62d74e3a03b3bffefe09a9df9d8d6849637f40a3"} Feb 27 17:42:47 crc kubenswrapper[4708]: I0227 17:42:47.220899 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="797b3f1e660956577d9cf19d62d74e3a03b3bffefe09a9df9d8d6849637f40a3" Feb 27 17:42:47 crc kubenswrapper[4708]: I0227 17:42:47.221183 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536902-mlgds" Feb 27 17:42:47 crc kubenswrapper[4708]: I0227 17:42:47.272808 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536896-6vwll"] Feb 27 17:42:47 crc kubenswrapper[4708]: I0227 17:42:47.282247 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536896-6vwll"] Feb 27 17:42:48 crc kubenswrapper[4708]: I0227 17:42:48.240131 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70a14662-4bc8-4577-8eb6-f51c9dc4a6b2" path="/var/lib/kubelet/pods/70a14662-4bc8-4577-8eb6-f51c9dc4a6b2/volumes" Feb 27 17:42:49 crc kubenswrapper[4708]: E0227 17:42:49.230808 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:43:02 crc kubenswrapper[4708]: I0227 17:43:02.119293 4708 scope.go:117] "RemoveContainer" containerID="0e9c88c3b760e390c1aab0055697761399677f9a8a1f2bb9d244a13d710f12ff" Feb 27 17:43:04 crc kubenswrapper[4708]: E0227 17:43:04.230109 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:43:05 crc kubenswrapper[4708]: I0227 17:43:05.631479 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:43:05 crc kubenswrapper[4708]: I0227 17:43:05.631772 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:43:15 crc kubenswrapper[4708]: E0227 17:43:15.232534 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" Feb 27 17:43:29 crc kubenswrapper[4708]: I0227 17:43:29.715696 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmjkm" event={"ID":"1a3aa83a-b917-43c1-9017-aa9db83770fe","Type":"ContainerStarted","Data":"3a8ee6428d905a45cd057c36df85fa7f12a7283fbdfc4fe6cd24d26d4a0b762a"} Feb 27 17:43:33 crc kubenswrapper[4708]: I0227 17:43:33.767633 4708 generic.go:334] "Generic (PLEG): container finished" podID="1a3aa83a-b917-43c1-9017-aa9db83770fe" containerID="3a8ee6428d905a45cd057c36df85fa7f12a7283fbdfc4fe6cd24d26d4a0b762a" exitCode=0 Feb 27 17:43:33 crc kubenswrapper[4708]: I0227 17:43:33.768039 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmjkm" 
event={"ID":"1a3aa83a-b917-43c1-9017-aa9db83770fe","Type":"ContainerDied","Data":"3a8ee6428d905a45cd057c36df85fa7f12a7283fbdfc4fe6cd24d26d4a0b762a"} Feb 27 17:43:34 crc kubenswrapper[4708]: I0227 17:43:34.781099 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmjkm" event={"ID":"1a3aa83a-b917-43c1-9017-aa9db83770fe","Type":"ContainerStarted","Data":"08aa54723470ba3967b29c8ac5034f7b4253bfed8f45d3782856a939badbf30e"} Feb 27 17:43:34 crc kubenswrapper[4708]: I0227 17:43:34.803576 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cmjkm" podStartSLOduration=1.790825167 podStartE2EDuration="3m14.803557728s" podCreationTimestamp="2026-02-27 17:40:20 +0000 UTC" firstStartedPulling="2026-02-27 17:40:21.465070737 +0000 UTC m=+2819.980868324" lastFinishedPulling="2026-02-27 17:43:34.477803288 +0000 UTC m=+3012.993600885" observedRunningTime="2026-02-27 17:43:34.796665825 +0000 UTC m=+3013.312463412" watchObservedRunningTime="2026-02-27 17:43:34.803557728 +0000 UTC m=+3013.319355315" Feb 27 17:43:35 crc kubenswrapper[4708]: I0227 17:43:35.631560 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:43:35 crc kubenswrapper[4708]: I0227 17:43:35.631611 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:43:35 crc kubenswrapper[4708]: I0227 17:43:35.631644 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 17:43:35 crc kubenswrapper[4708]: I0227 17:43:35.632274 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:43:35 crc kubenswrapper[4708]: I0227 17:43:35.632324 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" gracePeriod=600 Feb 27 17:43:35 crc kubenswrapper[4708]: E0227 17:43:35.757822 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:43:35 crc kubenswrapper[4708]: I0227 17:43:35.794459 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" 
containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" exitCode=0 Feb 27 17:43:35 crc kubenswrapper[4708]: I0227 17:43:35.794500 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722"} Feb 27 17:43:35 crc kubenswrapper[4708]: I0227 17:43:35.794535 4708 scope.go:117] "RemoveContainer" containerID="439742b868d272d84c1db3500bcc1b293c0585be3463e98cfb78d97fcb4e3465" Feb 27 17:43:35 crc kubenswrapper[4708]: I0227 17:43:35.795769 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:43:35 crc kubenswrapper[4708]: E0227 17:43:35.796210 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:43:40 crc kubenswrapper[4708]: I0227 17:43:40.750159 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:43:40 crc kubenswrapper[4708]: I0227 17:43:40.750785 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:43:41 crc kubenswrapper[4708]: I0227 17:43:41.819465 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" containerName="registry-server" probeResult="failure" output=< Feb 27 17:43:41 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 17:43:41 crc kubenswrapper[4708]: > Feb 27 17:43:50 crc kubenswrapper[4708]: I0227 17:43:50.228532 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:43:50 crc kubenswrapper[4708]: E0227 17:43:50.229337 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:43:50 crc kubenswrapper[4708]: I0227 17:43:50.816366 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:43:50 crc kubenswrapper[4708]: I0227 17:43:50.885918 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:43:51 crc kubenswrapper[4708]: I0227 17:43:51.750448 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cmjkm"] Feb 27 17:43:51 crc kubenswrapper[4708]: I0227 17:43:51.984229 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cmjkm" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" containerName="registry-server" 
containerID="cri-o://08aa54723470ba3967b29c8ac5034f7b4253bfed8f45d3782856a939badbf30e" gracePeriod=2 Feb 27 17:43:52 crc kubenswrapper[4708]: I0227 17:43:52.610426 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:43:52 crc kubenswrapper[4708]: I0227 17:43:52.715447 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3aa83a-b917-43c1-9017-aa9db83770fe-utilities\") pod \"1a3aa83a-b917-43c1-9017-aa9db83770fe\" (UID: \"1a3aa83a-b917-43c1-9017-aa9db83770fe\") " Feb 27 17:43:52 crc kubenswrapper[4708]: I0227 17:43:52.715543 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2rhfc\" (UniqueName: \"kubernetes.io/projected/1a3aa83a-b917-43c1-9017-aa9db83770fe-kube-api-access-2rhfc\") pod \"1a3aa83a-b917-43c1-9017-aa9db83770fe\" (UID: \"1a3aa83a-b917-43c1-9017-aa9db83770fe\") " Feb 27 17:43:52 crc kubenswrapper[4708]: I0227 17:43:52.715569 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3aa83a-b917-43c1-9017-aa9db83770fe-catalog-content\") pod \"1a3aa83a-b917-43c1-9017-aa9db83770fe\" (UID: \"1a3aa83a-b917-43c1-9017-aa9db83770fe\") " Feb 27 17:43:52 crc kubenswrapper[4708]: I0227 17:43:52.716234 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a3aa83a-b917-43c1-9017-aa9db83770fe-utilities" (OuterVolumeSpecName: "utilities") pod "1a3aa83a-b917-43c1-9017-aa9db83770fe" (UID: "1a3aa83a-b917-43c1-9017-aa9db83770fe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:43:52 crc kubenswrapper[4708]: I0227 17:43:52.721766 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1a3aa83a-b917-43c1-9017-aa9db83770fe-kube-api-access-2rhfc" (OuterVolumeSpecName: "kube-api-access-2rhfc") pod "1a3aa83a-b917-43c1-9017-aa9db83770fe" (UID: "1a3aa83a-b917-43c1-9017-aa9db83770fe"). InnerVolumeSpecName "kube-api-access-2rhfc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:43:52 crc kubenswrapper[4708]: I0227 17:43:52.818644 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2rhfc\" (UniqueName: \"kubernetes.io/projected/1a3aa83a-b917-43c1-9017-aa9db83770fe-kube-api-access-2rhfc\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:52 crc kubenswrapper[4708]: I0227 17:43:52.818687 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1a3aa83a-b917-43c1-9017-aa9db83770fe-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:52 crc kubenswrapper[4708]: I0227 17:43:52.847217 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1a3aa83a-b917-43c1-9017-aa9db83770fe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1a3aa83a-b917-43c1-9017-aa9db83770fe" (UID: "1a3aa83a-b917-43c1-9017-aa9db83770fe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:43:52 crc kubenswrapper[4708]: I0227 17:43:52.920361 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1a3aa83a-b917-43c1-9017-aa9db83770fe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:43:53 crc kubenswrapper[4708]: I0227 17:43:53.002291 4708 generic.go:334] "Generic (PLEG): container finished" podID="1a3aa83a-b917-43c1-9017-aa9db83770fe" containerID="08aa54723470ba3967b29c8ac5034f7b4253bfed8f45d3782856a939badbf30e" exitCode=0 Feb 27 17:43:53 crc kubenswrapper[4708]: I0227 17:43:53.002332 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmjkm" event={"ID":"1a3aa83a-b917-43c1-9017-aa9db83770fe","Type":"ContainerDied","Data":"08aa54723470ba3967b29c8ac5034f7b4253bfed8f45d3782856a939badbf30e"} Feb 27 17:43:53 crc kubenswrapper[4708]: I0227 17:43:53.002361 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cmjkm" event={"ID":"1a3aa83a-b917-43c1-9017-aa9db83770fe","Type":"ContainerDied","Data":"4aa9f6cb8ff764b3003c59bf4947abce8f0c600f23fd3a4563b82ca441fab9d4"} Feb 27 17:43:53 crc kubenswrapper[4708]: I0227 17:43:53.002380 4708 scope.go:117] "RemoveContainer" containerID="08aa54723470ba3967b29c8ac5034f7b4253bfed8f45d3782856a939badbf30e" Feb 27 17:43:53 crc kubenswrapper[4708]: I0227 17:43:53.002429 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cmjkm" Feb 27 17:43:53 crc kubenswrapper[4708]: I0227 17:43:53.034298 4708 scope.go:117] "RemoveContainer" containerID="3a8ee6428d905a45cd057c36df85fa7f12a7283fbdfc4fe6cd24d26d4a0b762a" Feb 27 17:43:53 crc kubenswrapper[4708]: I0227 17:43:53.052966 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cmjkm"] Feb 27 17:43:53 crc kubenswrapper[4708]: I0227 17:43:53.061331 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cmjkm"] Feb 27 17:43:53 crc kubenswrapper[4708]: I0227 17:43:53.070760 4708 scope.go:117] "RemoveContainer" containerID="dc7a289d1914fcbc11007e072d26abbb1192b514f40b994f8f132f8424d5f964" Feb 27 17:43:53 crc kubenswrapper[4708]: I0227 17:43:53.125186 4708 scope.go:117] "RemoveContainer" containerID="08aa54723470ba3967b29c8ac5034f7b4253bfed8f45d3782856a939badbf30e" Feb 27 17:43:53 crc kubenswrapper[4708]: E0227 17:43:53.125622 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08aa54723470ba3967b29c8ac5034f7b4253bfed8f45d3782856a939badbf30e\": container with ID starting with 08aa54723470ba3967b29c8ac5034f7b4253bfed8f45d3782856a939badbf30e not found: ID does not exist" containerID="08aa54723470ba3967b29c8ac5034f7b4253bfed8f45d3782856a939badbf30e" Feb 27 17:43:53 crc kubenswrapper[4708]: I0227 17:43:53.125687 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08aa54723470ba3967b29c8ac5034f7b4253bfed8f45d3782856a939badbf30e"} err="failed to get container status \"08aa54723470ba3967b29c8ac5034f7b4253bfed8f45d3782856a939badbf30e\": rpc error: code = NotFound desc = could not find container \"08aa54723470ba3967b29c8ac5034f7b4253bfed8f45d3782856a939badbf30e\": container with ID starting with 08aa54723470ba3967b29c8ac5034f7b4253bfed8f45d3782856a939badbf30e not found: ID does not exist" Feb 27 17:43:53 crc 
kubenswrapper[4708]: I0227 17:43:53.125728 4708 scope.go:117] "RemoveContainer" containerID="3a8ee6428d905a45cd057c36df85fa7f12a7283fbdfc4fe6cd24d26d4a0b762a" Feb 27 17:43:53 crc kubenswrapper[4708]: E0227 17:43:53.126080 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a8ee6428d905a45cd057c36df85fa7f12a7283fbdfc4fe6cd24d26d4a0b762a\": container with ID starting with 3a8ee6428d905a45cd057c36df85fa7f12a7283fbdfc4fe6cd24d26d4a0b762a not found: ID does not exist" containerID="3a8ee6428d905a45cd057c36df85fa7f12a7283fbdfc4fe6cd24d26d4a0b762a" Feb 27 17:43:53 crc kubenswrapper[4708]: I0227 17:43:53.126115 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a8ee6428d905a45cd057c36df85fa7f12a7283fbdfc4fe6cd24d26d4a0b762a"} err="failed to get container status \"3a8ee6428d905a45cd057c36df85fa7f12a7283fbdfc4fe6cd24d26d4a0b762a\": rpc error: code = NotFound desc = could not find container \"3a8ee6428d905a45cd057c36df85fa7f12a7283fbdfc4fe6cd24d26d4a0b762a\": container with ID starting with 3a8ee6428d905a45cd057c36df85fa7f12a7283fbdfc4fe6cd24d26d4a0b762a not found: ID does not exist" Feb 27 17:43:53 crc kubenswrapper[4708]: I0227 17:43:53.126139 4708 scope.go:117] "RemoveContainer" containerID="dc7a289d1914fcbc11007e072d26abbb1192b514f40b994f8f132f8424d5f964" Feb 27 17:43:53 crc kubenswrapper[4708]: E0227 17:43:53.126459 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dc7a289d1914fcbc11007e072d26abbb1192b514f40b994f8f132f8424d5f964\": container with ID starting with dc7a289d1914fcbc11007e072d26abbb1192b514f40b994f8f132f8424d5f964 not found: ID does not exist" containerID="dc7a289d1914fcbc11007e072d26abbb1192b514f40b994f8f132f8424d5f964" Feb 27 17:43:53 crc kubenswrapper[4708]: I0227 17:43:53.126505 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dc7a289d1914fcbc11007e072d26abbb1192b514f40b994f8f132f8424d5f964"} err="failed to get container status \"dc7a289d1914fcbc11007e072d26abbb1192b514f40b994f8f132f8424d5f964\": rpc error: code = NotFound desc = could not find container \"dc7a289d1914fcbc11007e072d26abbb1192b514f40b994f8f132f8424d5f964\": container with ID starting with dc7a289d1914fcbc11007e072d26abbb1192b514f40b994f8f132f8424d5f964 not found: ID does not exist" Feb 27 17:43:54 crc kubenswrapper[4708]: I0227 17:43:54.247996 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" path="/var/lib/kubelet/pods/1a3aa83a-b917-43c1-9017-aa9db83770fe/volumes" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.165300 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536904-8bkhj"] Feb 27 17:44:00 crc kubenswrapper[4708]: E0227 17:44:00.166681 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f46db3bc-f11b-4634-9916-10c0094d3d5f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.166708 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f46db3bc-f11b-4634-9916-10c0094d3d5f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 27 17:44:00 crc kubenswrapper[4708]: E0227 17:44:00.166740 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6db7652-4a2c-4ae5-9431-ffde3373ae3f" containerName="oc" Feb 27 17:44:00 crc 
kubenswrapper[4708]: I0227 17:44:00.166751 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6db7652-4a2c-4ae5-9431-ffde3373ae3f" containerName="oc" Feb 27 17:44:00 crc kubenswrapper[4708]: E0227 17:44:00.166763 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" containerName="extract-content" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.166774 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" containerName="extract-content" Feb 27 17:44:00 crc kubenswrapper[4708]: E0227 17:44:00.166804 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" containerName="extract-utilities" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.166815 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" containerName="extract-utilities" Feb 27 17:44:00 crc kubenswrapper[4708]: E0227 17:44:00.166882 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" containerName="registry-server" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.166894 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" containerName="registry-server" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.167208 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f46db3bc-f11b-4634-9916-10c0094d3d5f" containerName="telemetry-edpm-deployment-openstack-edpm-ipam" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.167238 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="1a3aa83a-b917-43c1-9017-aa9db83770fe" containerName="registry-server" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.167263 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6db7652-4a2c-4ae5-9431-ffde3373ae3f" containerName="oc" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.168481 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536904-8bkhj" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.173908 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.174427 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.175469 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.177799 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536904-8bkhj"] Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.314858 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k45tf\" (UniqueName: \"kubernetes.io/projected/b1b82144-2072-420e-988a-bc5cea74f1ef-kube-api-access-k45tf\") pod \"auto-csr-approver-29536904-8bkhj\" (UID: \"b1b82144-2072-420e-988a-bc5cea74f1ef\") " pod="openshift-infra/auto-csr-approver-29536904-8bkhj" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.418132 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k45tf\" (UniqueName: \"kubernetes.io/projected/b1b82144-2072-420e-988a-bc5cea74f1ef-kube-api-access-k45tf\") pod \"auto-csr-approver-29536904-8bkhj\" (UID: \"b1b82144-2072-420e-988a-bc5cea74f1ef\") " pod="openshift-infra/auto-csr-approver-29536904-8bkhj" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.443570 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k45tf\" (UniqueName: \"kubernetes.io/projected/b1b82144-2072-420e-988a-bc5cea74f1ef-kube-api-access-k45tf\") pod \"auto-csr-approver-29536904-8bkhj\" (UID: \"b1b82144-2072-420e-988a-bc5cea74f1ef\") " pod="openshift-infra/auto-csr-approver-29536904-8bkhj" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.492706 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536904-8bkhj" Feb 27 17:44:00 crc kubenswrapper[4708]: I0227 17:44:00.900768 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536904-8bkhj"] Feb 27 17:44:01 crc kubenswrapper[4708]: I0227 17:44:01.098094 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536904-8bkhj" event={"ID":"b1b82144-2072-420e-988a-bc5cea74f1ef","Type":"ContainerStarted","Data":"676707a16c105da2354eff300c7a23c9ac375b52cec1abd26d48e94f0a24b54e"} Feb 27 17:44:01 crc kubenswrapper[4708]: E0227 17:44:01.975723 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:44:01 crc kubenswrapper[4708]: E0227 17:44:01.976938 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:44:01 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:44:01 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k45tf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536904-8bkhj_openshift-infra(b1b82144-2072-420e-988a-bc5cea74f1ef): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:44:01 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:44:01 crc kubenswrapper[4708]: E0227 17:44:01.978494 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536904-8bkhj" podUID="b1b82144-2072-420e-988a-bc5cea74f1ef" Feb 27 17:44:02 crc kubenswrapper[4708]: E0227 17:44:02.112862 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536904-8bkhj" podUID="b1b82144-2072-420e-988a-bc5cea74f1ef" Feb 27 17:44:04 crc kubenswrapper[4708]: I0227 17:44:04.228059 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:44:04 crc kubenswrapper[4708]: E0227 17:44:04.228623 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:44:15 crc kubenswrapper[4708]: I0227 17:44:15.229629 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:44:15 crc kubenswrapper[4708]: E0227 17:44:15.230782 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:44:15 crc kubenswrapper[4708]: E0227 17:44:15.372259 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:44:15 crc kubenswrapper[4708]: E0227 17:44:15.373063 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:44:15 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:44:15 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k45tf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536904-8bkhj_openshift-infra(b1b82144-2072-420e-988a-bc5cea74f1ef): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:44:15 crc kubenswrapper[4708]: > logger="UnhandledError" 
Feb 27 17:44:15 crc kubenswrapper[4708]: E0227 17:44:15.374493 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536904-8bkhj" podUID="b1b82144-2072-420e-988a-bc5cea74f1ef" Feb 27 17:44:26 crc kubenswrapper[4708]: I0227 17:44:26.228711 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:44:26 crc kubenswrapper[4708]: E0227 17:44:26.229606 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:44:27 crc kubenswrapper[4708]: E0227 17:44:27.231705 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536904-8bkhj" podUID="b1b82144-2072-420e-988a-bc5cea74f1ef" Feb 27 17:44:41 crc kubenswrapper[4708]: I0227 17:44:41.229818 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:44:41 crc kubenswrapper[4708]: E0227 17:44:41.231026 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:44:42 crc kubenswrapper[4708]: E0227 17:44:42.315800 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:44:42 crc kubenswrapper[4708]: E0227 17:44:42.316445 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:44:42 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:44:42 crc kubenswrapper[4708]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-k45tf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536904-8bkhj_openshift-infra(b1b82144-2072-420e-988a-bc5cea74f1ef): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:44:42 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:44:42 crc kubenswrapper[4708]: E0227 17:44:42.317732 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536904-8bkhj" podUID="b1b82144-2072-420e-988a-bc5cea74f1ef" Feb 27 17:44:44 crc kubenswrapper[4708]: I0227 17:44:44.308064 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-snzg7"] Feb 27 17:44:44 crc kubenswrapper[4708]: I0227 17:44:44.312653 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:44:44 crc kubenswrapper[4708]: I0227 17:44:44.324283 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-snzg7"] Feb 27 17:44:44 crc kubenswrapper[4708]: I0227 17:44:44.507380 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj27k\" (UniqueName: \"kubernetes.io/projected/406e3a13-ea1e-4cce-9503-79bd676f0160-kube-api-access-wj27k\") pod \"certified-operators-snzg7\" (UID: \"406e3a13-ea1e-4cce-9503-79bd676f0160\") " pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:44:44 crc kubenswrapper[4708]: I0227 17:44:44.507480 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e3a13-ea1e-4cce-9503-79bd676f0160-catalog-content\") pod \"certified-operators-snzg7\" (UID: \"406e3a13-ea1e-4cce-9503-79bd676f0160\") " pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:44:44 crc kubenswrapper[4708]: I0227 17:44:44.507812 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e3a13-ea1e-4cce-9503-79bd676f0160-utilities\") pod \"certified-operators-snzg7\" (UID: \"406e3a13-ea1e-4cce-9503-79bd676f0160\") " pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:44:44 crc kubenswrapper[4708]: I0227 17:44:44.609749 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wj27k\" (UniqueName: \"kubernetes.io/projected/406e3a13-ea1e-4cce-9503-79bd676f0160-kube-api-access-wj27k\") pod \"certified-operators-snzg7\" (UID: \"406e3a13-ea1e-4cce-9503-79bd676f0160\") " pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:44:44 crc kubenswrapper[4708]: I0227 17:44:44.609791 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e3a13-ea1e-4cce-9503-79bd676f0160-catalog-content\") pod \"certified-operators-snzg7\" (UID: \"406e3a13-ea1e-4cce-9503-79bd676f0160\") " pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:44:44 crc kubenswrapper[4708]: I0227 17:44:44.609896 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e3a13-ea1e-4cce-9503-79bd676f0160-utilities\") pod \"certified-operators-snzg7\" (UID: \"406e3a13-ea1e-4cce-9503-79bd676f0160\") " pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:44:44 crc kubenswrapper[4708]: I0227 17:44:44.610316 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e3a13-ea1e-4cce-9503-79bd676f0160-utilities\") pod \"certified-operators-snzg7\" (UID: \"406e3a13-ea1e-4cce-9503-79bd676f0160\") " pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:44:44 crc kubenswrapper[4708]: I0227 17:44:44.610758 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e3a13-ea1e-4cce-9503-79bd676f0160-catalog-content\") pod \"certified-operators-snzg7\" (UID: \"406e3a13-ea1e-4cce-9503-79bd676f0160\") " pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:44:44 crc kubenswrapper[4708]: I0227 17:44:44.631815 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-wj27k\" (UniqueName: \"kubernetes.io/projected/406e3a13-ea1e-4cce-9503-79bd676f0160-kube-api-access-wj27k\") pod \"certified-operators-snzg7\" (UID: \"406e3a13-ea1e-4cce-9503-79bd676f0160\") " pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:44:44 crc kubenswrapper[4708]: I0227 17:44:44.662757 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:44:45 crc kubenswrapper[4708]: I0227 17:44:45.212777 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-snzg7"] Feb 27 17:44:45 crc kubenswrapper[4708]: I0227 17:44:45.669860 4708 generic.go:334] "Generic (PLEG): container finished" podID="406e3a13-ea1e-4cce-9503-79bd676f0160" containerID="caee2211a06a55e66da6849d20a0476d70f790b102b97779d79ea884c0aa44f4" exitCode=0 Feb 27 17:44:45 crc kubenswrapper[4708]: I0227 17:44:45.669930 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snzg7" event={"ID":"406e3a13-ea1e-4cce-9503-79bd676f0160","Type":"ContainerDied","Data":"caee2211a06a55e66da6849d20a0476d70f790b102b97779d79ea884c0aa44f4"} Feb 27 17:44:45 crc kubenswrapper[4708]: I0227 17:44:45.670196 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snzg7" event={"ID":"406e3a13-ea1e-4cce-9503-79bd676f0160","Type":"ContainerStarted","Data":"bf85b29eec16fdf95d38c451beaa4c4f61cad3909e44c402d6dd738041d4853f"} Feb 27 17:44:46 crc kubenswrapper[4708]: E0227 17:44:46.281230 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 17:44:46 crc kubenswrapper[4708]: E0227 17:44:46.281453 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wj27k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-snzg7_openshift-marketplace(406e3a13-ea1e-4cce-9503-79bd676f0160): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:44:46 crc kubenswrapper[4708]: E0227 17:44:46.282723 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-snzg7" podUID="406e3a13-ea1e-4cce-9503-79bd676f0160" Feb 27 17:44:46 crc kubenswrapper[4708]: E0227 17:44:46.693382 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-snzg7" podUID="406e3a13-ea1e-4cce-9503-79bd676f0160" Feb 27 17:44:55 crc kubenswrapper[4708]: E0227 17:44:55.231933 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536904-8bkhj" podUID="b1b82144-2072-420e-988a-bc5cea74f1ef" Feb 27 17:44:56 crc kubenswrapper[4708]: I0227 17:44:56.229764 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:44:56 crc kubenswrapper[4708]: E0227 17:44:56.230938 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.200740 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr"] Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.203943 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.208012 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.208418 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.214062 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr"] Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.227378 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5274b8a-9803-4096-a176-01de86c631f3-config-volume\") pod \"collect-profiles-29536905-88wqr\" (UID: \"a5274b8a-9803-4096-a176-01de86c631f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.227575 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvrts\" (UniqueName: \"kubernetes.io/projected/a5274b8a-9803-4096-a176-01de86c631f3-kube-api-access-xvrts\") pod \"collect-profiles-29536905-88wqr\" (UID: \"a5274b8a-9803-4096-a176-01de86c631f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.227748 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5274b8a-9803-4096-a176-01de86c631f3-secret-volume\") pod \"collect-profiles-29536905-88wqr\" (UID: \"a5274b8a-9803-4096-a176-01de86c631f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.329972 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvrts\" (UniqueName: \"kubernetes.io/projected/a5274b8a-9803-4096-a176-01de86c631f3-kube-api-access-xvrts\") pod \"collect-profiles-29536905-88wqr\" (UID: \"a5274b8a-9803-4096-a176-01de86c631f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.330153 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5274b8a-9803-4096-a176-01de86c631f3-secret-volume\") pod \"collect-profiles-29536905-88wqr\" (UID: \"a5274b8a-9803-4096-a176-01de86c631f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.330360 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5274b8a-9803-4096-a176-01de86c631f3-config-volume\") pod \"collect-profiles-29536905-88wqr\" (UID: \"a5274b8a-9803-4096-a176-01de86c631f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.331740 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5274b8a-9803-4096-a176-01de86c631f3-config-volume\") pod \"collect-profiles-29536905-88wqr\" (UID: \"a5274b8a-9803-4096-a176-01de86c631f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.338356 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5274b8a-9803-4096-a176-01de86c631f3-secret-volume\") pod \"collect-profiles-29536905-88wqr\" (UID: \"a5274b8a-9803-4096-a176-01de86c631f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.354172 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvrts\" (UniqueName: \"kubernetes.io/projected/a5274b8a-9803-4096-a176-01de86c631f3-kube-api-access-xvrts\") pod \"collect-profiles-29536905-88wqr\" (UID: \"a5274b8a-9803-4096-a176-01de86c631f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" Feb 27 17:45:00 crc kubenswrapper[4708]: I0227 17:45:00.531162 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" Feb 27 17:45:01 crc kubenswrapper[4708]: I0227 17:45:01.073460 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr"] Feb 27 17:45:01 crc kubenswrapper[4708]: I0227 17:45:01.865335 4708 generic.go:334] "Generic (PLEG): container finished" podID="a5274b8a-9803-4096-a176-01de86c631f3" containerID="f95088672443750716d6d84d43246519db12cd69eda5db689aa522b622b2fe7f" exitCode=0 Feb 27 17:45:01 crc kubenswrapper[4708]: I0227 17:45:01.865381 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" event={"ID":"a5274b8a-9803-4096-a176-01de86c631f3","Type":"ContainerDied","Data":"f95088672443750716d6d84d43246519db12cd69eda5db689aa522b622b2fe7f"} Feb 27 17:45:01 crc kubenswrapper[4708]: I0227 17:45:01.865408 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" event={"ID":"a5274b8a-9803-4096-a176-01de86c631f3","Type":"ContainerStarted","Data":"fbff067211b4402375f2735436e7195e606ad15b3dc3f948a3b0772988930643"} Feb 27 17:45:02 crc kubenswrapper[4708]: I0227 17:45:02.242706 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 17:45:02 crc kubenswrapper[4708]: E0227 17:45:02.951209 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 
17:45:02 crc kubenswrapper[4708]: E0227 17:45:02.954888 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wj27k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-snzg7_openshift-marketplace(406e3a13-ea1e-4cce-9503-79bd676f0160): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:45:02 crc kubenswrapper[4708]: E0227 17:45:02.956413 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-snzg7" podUID="406e3a13-ea1e-4cce-9503-79bd676f0160" Feb 27 17:45:03 crc kubenswrapper[4708]: I0227 17:45:03.532632 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" Feb 27 17:45:03 crc kubenswrapper[4708]: I0227 17:45:03.641971 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5274b8a-9803-4096-a176-01de86c631f3-secret-volume\") pod \"a5274b8a-9803-4096-a176-01de86c631f3\" (UID: \"a5274b8a-9803-4096-a176-01de86c631f3\") " Feb 27 17:45:03 crc kubenswrapper[4708]: I0227 17:45:03.642311 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5274b8a-9803-4096-a176-01de86c631f3-config-volume\") pod \"a5274b8a-9803-4096-a176-01de86c631f3\" (UID: \"a5274b8a-9803-4096-a176-01de86c631f3\") " Feb 27 17:45:03 crc kubenswrapper[4708]: I0227 17:45:03.642387 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvrts\" (UniqueName: \"kubernetes.io/projected/a5274b8a-9803-4096-a176-01de86c631f3-kube-api-access-xvrts\") pod \"a5274b8a-9803-4096-a176-01de86c631f3\" (UID: \"a5274b8a-9803-4096-a176-01de86c631f3\") " Feb 27 17:45:03 crc kubenswrapper[4708]: I0227 17:45:03.643202 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a5274b8a-9803-4096-a176-01de86c631f3-config-volume" (OuterVolumeSpecName: "config-volume") pod "a5274b8a-9803-4096-a176-01de86c631f3" (UID: "a5274b8a-9803-4096-a176-01de86c631f3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 17:45:03 crc kubenswrapper[4708]: I0227 17:45:03.655816 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5274b8a-9803-4096-a176-01de86c631f3-kube-api-access-xvrts" (OuterVolumeSpecName: "kube-api-access-xvrts") pod "a5274b8a-9803-4096-a176-01de86c631f3" (UID: "a5274b8a-9803-4096-a176-01de86c631f3"). InnerVolumeSpecName "kube-api-access-xvrts". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:45:03 crc kubenswrapper[4708]: I0227 17:45:03.658299 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5274b8a-9803-4096-a176-01de86c631f3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a5274b8a-9803-4096-a176-01de86c631f3" (UID: "a5274b8a-9803-4096-a176-01de86c631f3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 17:45:03 crc kubenswrapper[4708]: I0227 17:45:03.745738 4708 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a5274b8a-9803-4096-a176-01de86c631f3-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:03 crc kubenswrapper[4708]: I0227 17:45:03.745817 4708 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5274b8a-9803-4096-a176-01de86c631f3-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:03 crc kubenswrapper[4708]: I0227 17:45:03.745880 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvrts\" (UniqueName: \"kubernetes.io/projected/a5274b8a-9803-4096-a176-01de86c631f3-kube-api-access-xvrts\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:03 crc kubenswrapper[4708]: I0227 17:45:03.892974 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" event={"ID":"a5274b8a-9803-4096-a176-01de86c631f3","Type":"ContainerDied","Data":"fbff067211b4402375f2735436e7195e606ad15b3dc3f948a3b0772988930643"} Feb 27 17:45:03 crc kubenswrapper[4708]: I0227 17:45:03.893039 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fbff067211b4402375f2735436e7195e606ad15b3dc3f948a3b0772988930643" Feb 27 17:45:03 crc kubenswrapper[4708]: I0227 17:45:03.893079 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr" Feb 27 17:45:04 crc kubenswrapper[4708]: I0227 17:45:04.636422 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww"] Feb 27 17:45:04 crc kubenswrapper[4708]: I0227 17:45:04.653546 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536860-2z4ww"] Feb 27 17:45:06 crc kubenswrapper[4708]: I0227 17:45:06.257352 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e099232-71ed-4051-9c36-077664c3cd78" path="/var/lib/kubelet/pods/2e099232-71ed-4051-9c36-077664c3cd78/volumes" Feb 27 17:45:08 crc kubenswrapper[4708]: E0227 17:45:08.233186 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536904-8bkhj" podUID="b1b82144-2072-420e-988a-bc5cea74f1ef" Feb 27 17:45:09 crc kubenswrapper[4708]: I0227 17:45:09.228912 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:45:09 crc kubenswrapper[4708]: E0227 17:45:09.229506 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:45:14 crc kubenswrapper[4708]: E0227 17:45:14.232519 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-snzg7" podUID="406e3a13-ea1e-4cce-9503-79bd676f0160" Feb 27 17:45:19 crc kubenswrapper[4708]: E0227 17:45:19.229797 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536904-8bkhj" podUID="b1b82144-2072-420e-988a-bc5cea74f1ef" Feb 27 17:45:21 crc kubenswrapper[4708]: I0227 17:45:21.228378 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:45:21 crc kubenswrapper[4708]: E0227 17:45:21.228955 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:45:28 crc kubenswrapper[4708]: I0227 17:45:28.176986 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snzg7" event={"ID":"406e3a13-ea1e-4cce-9503-79bd676f0160","Type":"ContainerStarted","Data":"3099eba3d71f67ef8f4d05fa1f2cda883fdef27925d3be983e6b3720c1e1f4e0"} Feb 27 17:45:29 crc kubenswrapper[4708]: I0227 17:45:29.191249 4708 generic.go:334] "Generic (PLEG): container finished" podID="406e3a13-ea1e-4cce-9503-79bd676f0160" containerID="3099eba3d71f67ef8f4d05fa1f2cda883fdef27925d3be983e6b3720c1e1f4e0" exitCode=0 Feb 27 17:45:29 crc kubenswrapper[4708]: I0227 17:45:29.191291 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snzg7" event={"ID":"406e3a13-ea1e-4cce-9503-79bd676f0160","Type":"ContainerDied","Data":"3099eba3d71f67ef8f4d05fa1f2cda883fdef27925d3be983e6b3720c1e1f4e0"} Feb 27 17:45:30 crc kubenswrapper[4708]: I0227 17:45:30.201131 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snzg7" event={"ID":"406e3a13-ea1e-4cce-9503-79bd676f0160","Type":"ContainerStarted","Data":"9b5074e709c7be81629c08d4f1b05c1c6f5a3f2161548a554cbfa2570c431789"} Feb 27 17:45:30 crc kubenswrapper[4708]: I0227 17:45:30.228690 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-snzg7" podStartSLOduration=2.27612716 podStartE2EDuration="46.228673385s" podCreationTimestamp="2026-02-27 17:44:44 +0000 UTC" firstStartedPulling="2026-02-27 17:44:45.67318116 +0000 UTC m=+3084.188978767" lastFinishedPulling="2026-02-27 17:45:29.625727395 +0000 UTC m=+3128.141524992" observedRunningTime="2026-02-27 17:45:30.222733088 +0000 UTC m=+3128.738530675" watchObservedRunningTime="2026-02-27 17:45:30.228673385 +0000 UTC m=+3128.744470972" Feb 27 17:45:32 crc kubenswrapper[4708]: I0227 17:45:32.225991 4708 generic.go:334] "Generic (PLEG): container finished" podID="b1b82144-2072-420e-988a-bc5cea74f1ef" containerID="6c3c8db7ecf272f53b7e228182156a26205254347b1853e5f0dd80de23a59d90" exitCode=0 Feb 27 17:45:32 crc kubenswrapper[4708]: I0227 17:45:32.226111 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536904-8bkhj" 
event={"ID":"b1b82144-2072-420e-988a-bc5cea74f1ef","Type":"ContainerDied","Data":"6c3c8db7ecf272f53b7e228182156a26205254347b1853e5f0dd80de23a59d90"} Feb 27 17:45:33 crc kubenswrapper[4708]: I0227 17:45:33.695023 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536904-8bkhj" Feb 27 17:45:33 crc kubenswrapper[4708]: I0227 17:45:33.858102 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k45tf\" (UniqueName: \"kubernetes.io/projected/b1b82144-2072-420e-988a-bc5cea74f1ef-kube-api-access-k45tf\") pod \"b1b82144-2072-420e-988a-bc5cea74f1ef\" (UID: \"b1b82144-2072-420e-988a-bc5cea74f1ef\") " Feb 27 17:45:33 crc kubenswrapper[4708]: I0227 17:45:33.864098 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1b82144-2072-420e-988a-bc5cea74f1ef-kube-api-access-k45tf" (OuterVolumeSpecName: "kube-api-access-k45tf") pod "b1b82144-2072-420e-988a-bc5cea74f1ef" (UID: "b1b82144-2072-420e-988a-bc5cea74f1ef"). InnerVolumeSpecName "kube-api-access-k45tf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:45:33 crc kubenswrapper[4708]: I0227 17:45:33.960904 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k45tf\" (UniqueName: \"kubernetes.io/projected/b1b82144-2072-420e-988a-bc5cea74f1ef-kube-api-access-k45tf\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:34 crc kubenswrapper[4708]: I0227 17:45:34.253800 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536904-8bkhj" event={"ID":"b1b82144-2072-420e-988a-bc5cea74f1ef","Type":"ContainerDied","Data":"676707a16c105da2354eff300c7a23c9ac375b52cec1abd26d48e94f0a24b54e"} Feb 27 17:45:34 crc kubenswrapper[4708]: I0227 17:45:34.253836 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="676707a16c105da2354eff300c7a23c9ac375b52cec1abd26d48e94f0a24b54e" Feb 27 17:45:34 crc kubenswrapper[4708]: I0227 17:45:34.253839 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536904-8bkhj" Feb 27 17:45:34 crc kubenswrapper[4708]: I0227 17:45:34.662897 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:45:34 crc kubenswrapper[4708]: I0227 17:45:34.662956 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:45:34 crc kubenswrapper[4708]: I0227 17:45:34.719571 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:45:34 crc kubenswrapper[4708]: I0227 17:45:34.802627 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536898-dk72j"] Feb 27 17:45:34 crc kubenswrapper[4708]: I0227 17:45:34.812163 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536898-dk72j"] Feb 27 17:45:35 crc kubenswrapper[4708]: I0227 17:45:35.343813 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:45:35 crc kubenswrapper[4708]: I0227 17:45:35.400238 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-snzg7"] Feb 27 17:45:36 crc kubenswrapper[4708]: I0227 17:45:36.228339 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:45:36 crc kubenswrapper[4708]: E0227 17:45:36.229012 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:45:36 crc kubenswrapper[4708]: I0227 17:45:36.245141 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1b916670-1fc9-40b4-b106-99c7de6b151a" path="/var/lib/kubelet/pods/1b916670-1fc9-40b4-b106-99c7de6b151a/volumes" Feb 27 17:45:37 crc kubenswrapper[4708]: I0227 17:45:37.295170 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-snzg7" podUID="406e3a13-ea1e-4cce-9503-79bd676f0160" containerName="registry-server" containerID="cri-o://9b5074e709c7be81629c08d4f1b05c1c6f5a3f2161548a554cbfa2570c431789" gracePeriod=2 Feb 27 17:45:37 crc kubenswrapper[4708]: I0227 17:45:37.998731 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.164531 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e3a13-ea1e-4cce-9503-79bd676f0160-catalog-content\") pod \"406e3a13-ea1e-4cce-9503-79bd676f0160\" (UID: \"406e3a13-ea1e-4cce-9503-79bd676f0160\") " Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.164710 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e3a13-ea1e-4cce-9503-79bd676f0160-utilities\") pod \"406e3a13-ea1e-4cce-9503-79bd676f0160\" (UID: \"406e3a13-ea1e-4cce-9503-79bd676f0160\") " Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.164828 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj27k\" (UniqueName: \"kubernetes.io/projected/406e3a13-ea1e-4cce-9503-79bd676f0160-kube-api-access-wj27k\") pod \"406e3a13-ea1e-4cce-9503-79bd676f0160\" (UID: \"406e3a13-ea1e-4cce-9503-79bd676f0160\") " Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.165717 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406e3a13-ea1e-4cce-9503-79bd676f0160-utilities" (OuterVolumeSpecName: "utilities") pod "406e3a13-ea1e-4cce-9503-79bd676f0160" (UID: "406e3a13-ea1e-4cce-9503-79bd676f0160"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.169591 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406e3a13-ea1e-4cce-9503-79bd676f0160-kube-api-access-wj27k" (OuterVolumeSpecName: "kube-api-access-wj27k") pod "406e3a13-ea1e-4cce-9503-79bd676f0160" (UID: "406e3a13-ea1e-4cce-9503-79bd676f0160"). InnerVolumeSpecName "kube-api-access-wj27k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.233504 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/406e3a13-ea1e-4cce-9503-79bd676f0160-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "406e3a13-ea1e-4cce-9503-79bd676f0160" (UID: "406e3a13-ea1e-4cce-9503-79bd676f0160"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.267218 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/406e3a13-ea1e-4cce-9503-79bd676f0160-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.267334 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/406e3a13-ea1e-4cce-9503-79bd676f0160-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.267390 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wj27k\" (UniqueName: \"kubernetes.io/projected/406e3a13-ea1e-4cce-9503-79bd676f0160-kube-api-access-wj27k\") on node \"crc\" DevicePath \"\"" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.306464 4708 generic.go:334] "Generic (PLEG): container finished" podID="406e3a13-ea1e-4cce-9503-79bd676f0160" containerID="9b5074e709c7be81629c08d4f1b05c1c6f5a3f2161548a554cbfa2570c431789" exitCode=0 Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.306529 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snzg7" event={"ID":"406e3a13-ea1e-4cce-9503-79bd676f0160","Type":"ContainerDied","Data":"9b5074e709c7be81629c08d4f1b05c1c6f5a3f2161548a554cbfa2570c431789"} Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.306544 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-snzg7" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.306568 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-snzg7" event={"ID":"406e3a13-ea1e-4cce-9503-79bd676f0160","Type":"ContainerDied","Data":"bf85b29eec16fdf95d38c451beaa4c4f61cad3909e44c402d6dd738041d4853f"} Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.306597 4708 scope.go:117] "RemoveContainer" containerID="9b5074e709c7be81629c08d4f1b05c1c6f5a3f2161548a554cbfa2570c431789" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.335154 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-snzg7"] Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.335951 4708 scope.go:117] "RemoveContainer" containerID="3099eba3d71f67ef8f4d05fa1f2cda883fdef27925d3be983e6b3720c1e1f4e0" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.343647 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-snzg7"] Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.376013 4708 scope.go:117] "RemoveContainer" containerID="caee2211a06a55e66da6849d20a0476d70f790b102b97779d79ea884c0aa44f4" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.417816 4708 scope.go:117] "RemoveContainer" containerID="9b5074e709c7be81629c08d4f1b05c1c6f5a3f2161548a554cbfa2570c431789" Feb 27 17:45:38 crc kubenswrapper[4708]: E0227 17:45:38.418398 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b5074e709c7be81629c08d4f1b05c1c6f5a3f2161548a554cbfa2570c431789\": container with ID starting with 9b5074e709c7be81629c08d4f1b05c1c6f5a3f2161548a554cbfa2570c431789 not found: ID does not exist" containerID="9b5074e709c7be81629c08d4f1b05c1c6f5a3f2161548a554cbfa2570c431789" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.418430 
4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b5074e709c7be81629c08d4f1b05c1c6f5a3f2161548a554cbfa2570c431789"} err="failed to get container status \"9b5074e709c7be81629c08d4f1b05c1c6f5a3f2161548a554cbfa2570c431789\": rpc error: code = NotFound desc = could not find container \"9b5074e709c7be81629c08d4f1b05c1c6f5a3f2161548a554cbfa2570c431789\": container with ID starting with 9b5074e709c7be81629c08d4f1b05c1c6f5a3f2161548a554cbfa2570c431789 not found: ID does not exist" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.418454 4708 scope.go:117] "RemoveContainer" containerID="3099eba3d71f67ef8f4d05fa1f2cda883fdef27925d3be983e6b3720c1e1f4e0" Feb 27 17:45:38 crc kubenswrapper[4708]: E0227 17:45:38.418801 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3099eba3d71f67ef8f4d05fa1f2cda883fdef27925d3be983e6b3720c1e1f4e0\": container with ID starting with 3099eba3d71f67ef8f4d05fa1f2cda883fdef27925d3be983e6b3720c1e1f4e0 not found: ID does not exist" containerID="3099eba3d71f67ef8f4d05fa1f2cda883fdef27925d3be983e6b3720c1e1f4e0" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.418824 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3099eba3d71f67ef8f4d05fa1f2cda883fdef27925d3be983e6b3720c1e1f4e0"} err="failed to get container status \"3099eba3d71f67ef8f4d05fa1f2cda883fdef27925d3be983e6b3720c1e1f4e0\": rpc error: code = NotFound desc = could not find container \"3099eba3d71f67ef8f4d05fa1f2cda883fdef27925d3be983e6b3720c1e1f4e0\": container with ID starting with 3099eba3d71f67ef8f4d05fa1f2cda883fdef27925d3be983e6b3720c1e1f4e0 not found: ID does not exist" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.418919 4708 scope.go:117] "RemoveContainer" containerID="caee2211a06a55e66da6849d20a0476d70f790b102b97779d79ea884c0aa44f4" Feb 27 17:45:38 crc kubenswrapper[4708]: E0227 17:45:38.419299 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caee2211a06a55e66da6849d20a0476d70f790b102b97779d79ea884c0aa44f4\": container with ID starting with caee2211a06a55e66da6849d20a0476d70f790b102b97779d79ea884c0aa44f4 not found: ID does not exist" containerID="caee2211a06a55e66da6849d20a0476d70f790b102b97779d79ea884c0aa44f4" Feb 27 17:45:38 crc kubenswrapper[4708]: I0227 17:45:38.419337 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caee2211a06a55e66da6849d20a0476d70f790b102b97779d79ea884c0aa44f4"} err="failed to get container status \"caee2211a06a55e66da6849d20a0476d70f790b102b97779d79ea884c0aa44f4\": rpc error: code = NotFound desc = could not find container \"caee2211a06a55e66da6849d20a0476d70f790b102b97779d79ea884c0aa44f4\": container with ID starting with caee2211a06a55e66da6849d20a0476d70f790b102b97779d79ea884c0aa44f4 not found: ID does not exist" Feb 27 17:45:40 crc kubenswrapper[4708]: I0227 17:45:40.244748 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406e3a13-ea1e-4cce-9503-79bd676f0160" path="/var/lib/kubelet/pods/406e3a13-ea1e-4cce-9503-79bd676f0160/volumes" Feb 27 17:45:48 crc kubenswrapper[4708]: I0227 17:45:48.228356 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:45:48 crc kubenswrapper[4708]: E0227 17:45:48.229238 4708 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.167744 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536906-6hkdf"] Feb 27 17:46:00 crc kubenswrapper[4708]: E0227 17:46:00.169206 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1b82144-2072-420e-988a-bc5cea74f1ef" containerName="oc" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.169234 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1b82144-2072-420e-988a-bc5cea74f1ef" containerName="oc" Feb 27 17:46:00 crc kubenswrapper[4708]: E0227 17:46:00.169288 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406e3a13-ea1e-4cce-9503-79bd676f0160" containerName="registry-server" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.169302 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="406e3a13-ea1e-4cce-9503-79bd676f0160" containerName="registry-server" Feb 27 17:46:00 crc kubenswrapper[4708]: E0227 17:46:00.169343 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5274b8a-9803-4096-a176-01de86c631f3" containerName="collect-profiles" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.169355 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5274b8a-9803-4096-a176-01de86c631f3" containerName="collect-profiles" Feb 27 17:46:00 crc kubenswrapper[4708]: E0227 17:46:00.169383 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406e3a13-ea1e-4cce-9503-79bd676f0160" containerName="extract-content" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.169395 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="406e3a13-ea1e-4cce-9503-79bd676f0160" containerName="extract-content" Feb 27 17:46:00 crc kubenswrapper[4708]: E0227 17:46:00.169418 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406e3a13-ea1e-4cce-9503-79bd676f0160" containerName="extract-utilities" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.169431 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="406e3a13-ea1e-4cce-9503-79bd676f0160" containerName="extract-utilities" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.169800 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="406e3a13-ea1e-4cce-9503-79bd676f0160" containerName="registry-server" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.169875 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5274b8a-9803-4096-a176-01de86c631f3" containerName="collect-profiles" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.169934 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1b82144-2072-420e-988a-bc5cea74f1ef" containerName="oc" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.171355 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536906-6hkdf" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.174540 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.174838 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.174970 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.181091 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536906-6hkdf"] Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.269343 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bq74k\" (UniqueName: \"kubernetes.io/projected/7e26151a-c427-4ef6-b466-42266139ce98-kube-api-access-bq74k\") pod \"auto-csr-approver-29536906-6hkdf\" (UID: \"7e26151a-c427-4ef6-b466-42266139ce98\") " pod="openshift-infra/auto-csr-approver-29536906-6hkdf" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.374152 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bq74k\" (UniqueName: \"kubernetes.io/projected/7e26151a-c427-4ef6-b466-42266139ce98-kube-api-access-bq74k\") pod \"auto-csr-approver-29536906-6hkdf\" (UID: \"7e26151a-c427-4ef6-b466-42266139ce98\") " pod="openshift-infra/auto-csr-approver-29536906-6hkdf" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.389393 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9jvtx"] Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.395296 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.402333 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bq74k\" (UniqueName: \"kubernetes.io/projected/7e26151a-c427-4ef6-b466-42266139ce98-kube-api-access-bq74k\") pod \"auto-csr-approver-29536906-6hkdf\" (UID: \"7e26151a-c427-4ef6-b466-42266139ce98\") " pod="openshift-infra/auto-csr-approver-29536906-6hkdf" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.439506 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9jvtx"] Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.477001 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b12347-557a-489b-afaf-41c6903fa9c7-catalog-content\") pod \"community-operators-9jvtx\" (UID: \"41b12347-557a-489b-afaf-41c6903fa9c7\") " pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.477240 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4tn5\" (UniqueName: \"kubernetes.io/projected/41b12347-557a-489b-afaf-41c6903fa9c7-kube-api-access-j4tn5\") pod \"community-operators-9jvtx\" (UID: \"41b12347-557a-489b-afaf-41c6903fa9c7\") " pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.477530 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b12347-557a-489b-afaf-41c6903fa9c7-utilities\") pod \"community-operators-9jvtx\" (UID: \"41b12347-557a-489b-afaf-41c6903fa9c7\") " pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.508451 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536906-6hkdf" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.578909 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4tn5\" (UniqueName: \"kubernetes.io/projected/41b12347-557a-489b-afaf-41c6903fa9c7-kube-api-access-j4tn5\") pod \"community-operators-9jvtx\" (UID: \"41b12347-557a-489b-afaf-41c6903fa9c7\") " pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.579227 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b12347-557a-489b-afaf-41c6903fa9c7-utilities\") pod \"community-operators-9jvtx\" (UID: \"41b12347-557a-489b-afaf-41c6903fa9c7\") " pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.579369 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b12347-557a-489b-afaf-41c6903fa9c7-catalog-content\") pod \"community-operators-9jvtx\" (UID: \"41b12347-557a-489b-afaf-41c6903fa9c7\") " pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.579586 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b12347-557a-489b-afaf-41c6903fa9c7-utilities\") pod \"community-operators-9jvtx\" (UID: \"41b12347-557a-489b-afaf-41c6903fa9c7\") " pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.579940 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b12347-557a-489b-afaf-41c6903fa9c7-catalog-content\") pod \"community-operators-9jvtx\" (UID: \"41b12347-557a-489b-afaf-41c6903fa9c7\") " pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.609589 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4tn5\" (UniqueName: \"kubernetes.io/projected/41b12347-557a-489b-afaf-41c6903fa9c7-kube-api-access-j4tn5\") pod \"community-operators-9jvtx\" (UID: \"41b12347-557a-489b-afaf-41c6903fa9c7\") " pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.787090 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:46:00 crc kubenswrapper[4708]: I0227 17:46:00.996910 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536906-6hkdf"] Feb 27 17:46:01 crc kubenswrapper[4708]: I0227 17:46:01.258659 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9jvtx"] Feb 27 17:46:01 crc kubenswrapper[4708]: W0227 17:46:01.337581 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41b12347_557a_489b_afaf_41c6903fa9c7.slice/crio-5641bdf144889404a9ac4f197f55933c21fcda7b7fb3e1bb2fea2446ea64ad51 WatchSource:0}: Error finding container 5641bdf144889404a9ac4f197f55933c21fcda7b7fb3e1bb2fea2446ea64ad51: Status 404 returned error can't find the container with id 5641bdf144889404a9ac4f197f55933c21fcda7b7fb3e1bb2fea2446ea64ad51 Feb 27 17:46:01 crc kubenswrapper[4708]: I0227 17:46:01.606820 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536906-6hkdf" event={"ID":"7e26151a-c427-4ef6-b466-42266139ce98","Type":"ContainerStarted","Data":"6197bc6b825cb5cbf3b0938b036bda700cada9a6ec663a71caa96531be8397ca"} Feb 27 17:46:01 crc kubenswrapper[4708]: I0227 17:46:01.608683 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jvtx" event={"ID":"41b12347-557a-489b-afaf-41c6903fa9c7","Type":"ContainerStarted","Data":"6e0de3751a36b18bea8543e50031c68f71dcde838ea513d04741a894184c30f2"} Feb 27 17:46:01 crc kubenswrapper[4708]: I0227 17:46:01.608726 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jvtx" event={"ID":"41b12347-557a-489b-afaf-41c6903fa9c7","Type":"ContainerStarted","Data":"5641bdf144889404a9ac4f197f55933c21fcda7b7fb3e1bb2fea2446ea64ad51"} Feb 27 17:46:02 crc kubenswrapper[4708]: I0227 17:46:02.235126 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:46:02 crc kubenswrapper[4708]: E0227 17:46:02.235712 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:46:02 crc kubenswrapper[4708]: I0227 17:46:02.327189 4708 scope.go:117] "RemoveContainer" containerID="7c068807f0fd39e0a24b085993460f73a7625628b33ec26364f4756cb56391d5" Feb 27 17:46:02 crc kubenswrapper[4708]: I0227 17:46:02.622489 4708 generic.go:334] "Generic (PLEG): container finished" podID="41b12347-557a-489b-afaf-41c6903fa9c7" containerID="6e0de3751a36b18bea8543e50031c68f71dcde838ea513d04741a894184c30f2" exitCode=0 Feb 27 17:46:02 crc kubenswrapper[4708]: I0227 17:46:02.622538 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jvtx" event={"ID":"41b12347-557a-489b-afaf-41c6903fa9c7","Type":"ContainerDied","Data":"6e0de3751a36b18bea8543e50031c68f71dcde838ea513d04741a894184c30f2"} Feb 27 17:46:03 crc kubenswrapper[4708]: E0227 17:46:03.618357 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from 
manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 17:46:03 crc kubenswrapper[4708]: E0227 17:46:03.618873 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4tn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9jvtx_openshift-marketplace(41b12347-557a-489b-afaf-41c6903fa9c7): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:46:03 crc kubenswrapper[4708]: E0227 17:46:03.620404 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-9jvtx" podUID="41b12347-557a-489b-afaf-41c6903fa9c7" Feb 27 17:46:03 crc kubenswrapper[4708]: E0227 17:46:03.637233 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9jvtx" podUID="41b12347-557a-489b-afaf-41c6903fa9c7" Feb 27 17:46:04 crc kubenswrapper[4708]: I0227 17:46:04.644086 4708 generic.go:334] "Generic (PLEG): container finished" 
podID="7e26151a-c427-4ef6-b466-42266139ce98" containerID="3949841a7f0a843906dc0cdd5a221d62d142098aedc8494c2c8570d5cb3a0e0b" exitCode=0 Feb 27 17:46:04 crc kubenswrapper[4708]: I0227 17:46:04.644132 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536906-6hkdf" event={"ID":"7e26151a-c427-4ef6-b466-42266139ce98","Type":"ContainerDied","Data":"3949841a7f0a843906dc0cdd5a221d62d142098aedc8494c2c8570d5cb3a0e0b"} Feb 27 17:46:04 crc kubenswrapper[4708]: I0227 17:46:04.771668 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lflwk"] Feb 27 17:46:04 crc kubenswrapper[4708]: I0227 17:46:04.774216 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:04 crc kubenswrapper[4708]: I0227 17:46:04.806470 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lflwk"] Feb 27 17:46:04 crc kubenswrapper[4708]: I0227 17:46:04.874703 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q82zw\" (UniqueName: \"kubernetes.io/projected/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-kube-api-access-q82zw\") pod \"redhat-marketplace-lflwk\" (UID: \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\") " pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:04 crc kubenswrapper[4708]: I0227 17:46:04.874779 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-utilities\") pod \"redhat-marketplace-lflwk\" (UID: \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\") " pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:04 crc kubenswrapper[4708]: I0227 17:46:04.874885 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-catalog-content\") pod \"redhat-marketplace-lflwk\" (UID: \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\") " pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:04 crc kubenswrapper[4708]: I0227 17:46:04.976800 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q82zw\" (UniqueName: \"kubernetes.io/projected/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-kube-api-access-q82zw\") pod \"redhat-marketplace-lflwk\" (UID: \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\") " pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:04 crc kubenswrapper[4708]: I0227 17:46:04.977097 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-utilities\") pod \"redhat-marketplace-lflwk\" (UID: \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\") " pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:04 crc kubenswrapper[4708]: I0227 17:46:04.977224 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-catalog-content\") pod \"redhat-marketplace-lflwk\" (UID: \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\") " pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:04 crc kubenswrapper[4708]: I0227 17:46:04.977539 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-utilities\") pod \"redhat-marketplace-lflwk\" (UID: \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\") " pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:04 crc kubenswrapper[4708]: I0227 17:46:04.977666 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-catalog-content\") pod \"redhat-marketplace-lflwk\" (UID: \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\") " pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:04 crc kubenswrapper[4708]: I0227 17:46:04.999014 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q82zw\" (UniqueName: \"kubernetes.io/projected/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-kube-api-access-q82zw\") pod \"redhat-marketplace-lflwk\" (UID: \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\") " pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:05 crc kubenswrapper[4708]: I0227 17:46:05.123226 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:05 crc kubenswrapper[4708]: I0227 17:46:05.592756 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lflwk"] Feb 27 17:46:05 crc kubenswrapper[4708]: I0227 17:46:05.654174 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lflwk" event={"ID":"98f5575d-6149-4315-9fb5-e8f0cfc2bc37","Type":"ContainerStarted","Data":"453ed62744e5af1195ed40e927fafcefa7f96ae06a8d9c85598ecd0197ac2a60"} Feb 27 17:46:06 crc kubenswrapper[4708]: I0227 17:46:06.129740 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536906-6hkdf" Feb 27 17:46:06 crc kubenswrapper[4708]: I0227 17:46:06.199461 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bq74k\" (UniqueName: \"kubernetes.io/projected/7e26151a-c427-4ef6-b466-42266139ce98-kube-api-access-bq74k\") pod \"7e26151a-c427-4ef6-b466-42266139ce98\" (UID: \"7e26151a-c427-4ef6-b466-42266139ce98\") " Feb 27 17:46:06 crc kubenswrapper[4708]: I0227 17:46:06.207931 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e26151a-c427-4ef6-b466-42266139ce98-kube-api-access-bq74k" (OuterVolumeSpecName: "kube-api-access-bq74k") pod "7e26151a-c427-4ef6-b466-42266139ce98" (UID: "7e26151a-c427-4ef6-b466-42266139ce98"). InnerVolumeSpecName "kube-api-access-bq74k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:46:06 crc kubenswrapper[4708]: I0227 17:46:06.301449 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bq74k\" (UniqueName: \"kubernetes.io/projected/7e26151a-c427-4ef6-b466-42266139ce98-kube-api-access-bq74k\") on node \"crc\" DevicePath \"\"" Feb 27 17:46:06 crc kubenswrapper[4708]: I0227 17:46:06.666585 4708 generic.go:334] "Generic (PLEG): container finished" podID="98f5575d-6149-4315-9fb5-e8f0cfc2bc37" containerID="d8ea6f93e9f355367ae0378e175554f44e47d5848b4f11b46e9e0e346e6866c4" exitCode=0 Feb 27 17:46:06 crc kubenswrapper[4708]: I0227 17:46:06.666667 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lflwk" event={"ID":"98f5575d-6149-4315-9fb5-e8f0cfc2bc37","Type":"ContainerDied","Data":"d8ea6f93e9f355367ae0378e175554f44e47d5848b4f11b46e9e0e346e6866c4"} Feb 27 17:46:06 crc kubenswrapper[4708]: I0227 17:46:06.669709 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536906-6hkdf" event={"ID":"7e26151a-c427-4ef6-b466-42266139ce98","Type":"ContainerDied","Data":"6197bc6b825cb5cbf3b0938b036bda700cada9a6ec663a71caa96531be8397ca"} Feb 27 17:46:06 crc kubenswrapper[4708]: I0227 17:46:06.670125 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6197bc6b825cb5cbf3b0938b036bda700cada9a6ec663a71caa96531be8397ca" Feb 27 17:46:06 crc kubenswrapper[4708]: I0227 17:46:06.669801 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536906-6hkdf" Feb 27 17:46:07 crc kubenswrapper[4708]: I0227 17:46:07.234839 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536900-ctdk8"] Feb 27 17:46:07 crc kubenswrapper[4708]: I0227 17:46:07.246512 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536900-ctdk8"] Feb 27 17:46:08 crc kubenswrapper[4708]: I0227 17:46:08.245232 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c8a67245-e6f1-4c91-bf78-af3b7d4d77c0" path="/var/lib/kubelet/pods/c8a67245-e6f1-4c91-bf78-af3b7d4d77c0/volumes" Feb 27 17:46:08 crc kubenswrapper[4708]: I0227 17:46:08.705460 4708 generic.go:334] "Generic (PLEG): container finished" podID="98f5575d-6149-4315-9fb5-e8f0cfc2bc37" containerID="b89f77c5999e25ef8916ef4df4f3faa2fee8138af39caa5e27357ecd32c5c024" exitCode=0 Feb 27 17:46:08 crc kubenswrapper[4708]: I0227 17:46:08.705567 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lflwk" event={"ID":"98f5575d-6149-4315-9fb5-e8f0cfc2bc37","Type":"ContainerDied","Data":"b89f77c5999e25ef8916ef4df4f3faa2fee8138af39caa5e27357ecd32c5c024"} Feb 27 17:46:09 crc kubenswrapper[4708]: I0227 17:46:09.723459 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lflwk" event={"ID":"98f5575d-6149-4315-9fb5-e8f0cfc2bc37","Type":"ContainerStarted","Data":"25432a20616dde5c161cfc48b913d2df460d2338a6286a0ae62946a8eab078c5"} Feb 27 17:46:09 crc kubenswrapper[4708]: I0227 17:46:09.748372 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lflwk" podStartSLOduration=3.270102766 podStartE2EDuration="5.748351637s" podCreationTimestamp="2026-02-27 17:46:04 +0000 UTC" firstStartedPulling="2026-02-27 17:46:06.669267056 +0000 UTC m=+3165.185064653" 
lastFinishedPulling="2026-02-27 17:46:09.147515937 +0000 UTC m=+3167.663313524" observedRunningTime="2026-02-27 17:46:09.744498009 +0000 UTC m=+3168.260295596" watchObservedRunningTime="2026-02-27 17:46:09.748351637 +0000 UTC m=+3168.264149224" Feb 27 17:46:14 crc kubenswrapper[4708]: I0227 17:46:14.230219 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:46:14 crc kubenswrapper[4708]: E0227 17:46:14.231533 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:46:15 crc kubenswrapper[4708]: I0227 17:46:15.123776 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:15 crc kubenswrapper[4708]: I0227 17:46:15.124178 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:15 crc kubenswrapper[4708]: I0227 17:46:15.214502 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:15 crc kubenswrapper[4708]: I0227 17:46:15.880198 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:15 crc kubenswrapper[4708]: I0227 17:46:15.949994 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lflwk"] Feb 27 17:46:17 crc kubenswrapper[4708]: E0227 17:46:17.777370 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 17:46:17 crc kubenswrapper[4708]: E0227 17:46:17.777538 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j4tn5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-9jvtx_openshift-marketplace(41b12347-557a-489b-afaf-41c6903fa9c7): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:46:17 crc kubenswrapper[4708]: E0227 17:46:17.778742 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-9jvtx" podUID="41b12347-557a-489b-afaf-41c6903fa9c7" Feb 27 17:46:17 crc kubenswrapper[4708]: I0227 17:46:17.845519 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lflwk" podUID="98f5575d-6149-4315-9fb5-e8f0cfc2bc37" containerName="registry-server" containerID="cri-o://25432a20616dde5c161cfc48b913d2df460d2338a6286a0ae62946a8eab078c5" gracePeriod=2 Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.464558 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.497369 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-utilities\") pod \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\" (UID: \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\") " Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.497546 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-catalog-content\") pod \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\" (UID: \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\") " Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.497716 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q82zw\" (UniqueName: \"kubernetes.io/projected/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-kube-api-access-q82zw\") pod \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\" (UID: \"98f5575d-6149-4315-9fb5-e8f0cfc2bc37\") " Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.498539 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-utilities" (OuterVolumeSpecName: "utilities") pod "98f5575d-6149-4315-9fb5-e8f0cfc2bc37" (UID: "98f5575d-6149-4315-9fb5-e8f0cfc2bc37"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.500012 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.505415 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-kube-api-access-q82zw" (OuterVolumeSpecName: "kube-api-access-q82zw") pod "98f5575d-6149-4315-9fb5-e8f0cfc2bc37" (UID: "98f5575d-6149-4315-9fb5-e8f0cfc2bc37"). InnerVolumeSpecName "kube-api-access-q82zw". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.601951 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q82zw\" (UniqueName: \"kubernetes.io/projected/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-kube-api-access-q82zw\") on node \"crc\" DevicePath \"\"" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.775108 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98f5575d-6149-4315-9fb5-e8f0cfc2bc37" (UID: "98f5575d-6149-4315-9fb5-e8f0cfc2bc37"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.806148 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98f5575d-6149-4315-9fb5-e8f0cfc2bc37-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.856203 4708 generic.go:334] "Generic (PLEG): container finished" podID="98f5575d-6149-4315-9fb5-e8f0cfc2bc37" containerID="25432a20616dde5c161cfc48b913d2df460d2338a6286a0ae62946a8eab078c5" exitCode=0 Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.856246 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lflwk" event={"ID":"98f5575d-6149-4315-9fb5-e8f0cfc2bc37","Type":"ContainerDied","Data":"25432a20616dde5c161cfc48b913d2df460d2338a6286a0ae62946a8eab078c5"} Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.856271 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lflwk" event={"ID":"98f5575d-6149-4315-9fb5-e8f0cfc2bc37","Type":"ContainerDied","Data":"453ed62744e5af1195ed40e927fafcefa7f96ae06a8d9c85598ecd0197ac2a60"} Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.856287 4708 scope.go:117] "RemoveContainer" containerID="25432a20616dde5c161cfc48b913d2df460d2338a6286a0ae62946a8eab078c5" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.856298 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lflwk" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.896138 4708 scope.go:117] "RemoveContainer" containerID="b89f77c5999e25ef8916ef4df4f3faa2fee8138af39caa5e27357ecd32c5c024" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.926578 4708 scope.go:117] "RemoveContainer" containerID="d8ea6f93e9f355367ae0378e175554f44e47d5848b4f11b46e9e0e346e6866c4" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.934759 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lflwk"] Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.948536 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lflwk"] Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.985595 4708 scope.go:117] "RemoveContainer" containerID="25432a20616dde5c161cfc48b913d2df460d2338a6286a0ae62946a8eab078c5" Feb 27 17:46:18 crc kubenswrapper[4708]: E0227 17:46:18.986096 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25432a20616dde5c161cfc48b913d2df460d2338a6286a0ae62946a8eab078c5\": container with ID starting with 25432a20616dde5c161cfc48b913d2df460d2338a6286a0ae62946a8eab078c5 not found: ID does not exist" containerID="25432a20616dde5c161cfc48b913d2df460d2338a6286a0ae62946a8eab078c5" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.986161 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25432a20616dde5c161cfc48b913d2df460d2338a6286a0ae62946a8eab078c5"} err="failed to get container status \"25432a20616dde5c161cfc48b913d2df460d2338a6286a0ae62946a8eab078c5\": rpc error: code = NotFound desc = could not find container \"25432a20616dde5c161cfc48b913d2df460d2338a6286a0ae62946a8eab078c5\": container with ID starting with 25432a20616dde5c161cfc48b913d2df460d2338a6286a0ae62946a8eab078c5 not found: ID does not exist" Feb 27 17:46:18 
crc kubenswrapper[4708]: I0227 17:46:18.986197 4708 scope.go:117] "RemoveContainer" containerID="b89f77c5999e25ef8916ef4df4f3faa2fee8138af39caa5e27357ecd32c5c024" Feb 27 17:46:18 crc kubenswrapper[4708]: E0227 17:46:18.986984 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b89f77c5999e25ef8916ef4df4f3faa2fee8138af39caa5e27357ecd32c5c024\": container with ID starting with b89f77c5999e25ef8916ef4df4f3faa2fee8138af39caa5e27357ecd32c5c024 not found: ID does not exist" containerID="b89f77c5999e25ef8916ef4df4f3faa2fee8138af39caa5e27357ecd32c5c024" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.987046 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b89f77c5999e25ef8916ef4df4f3faa2fee8138af39caa5e27357ecd32c5c024"} err="failed to get container status \"b89f77c5999e25ef8916ef4df4f3faa2fee8138af39caa5e27357ecd32c5c024\": rpc error: code = NotFound desc = could not find container \"b89f77c5999e25ef8916ef4df4f3faa2fee8138af39caa5e27357ecd32c5c024\": container with ID starting with b89f77c5999e25ef8916ef4df4f3faa2fee8138af39caa5e27357ecd32c5c024 not found: ID does not exist" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.987086 4708 scope.go:117] "RemoveContainer" containerID="d8ea6f93e9f355367ae0378e175554f44e47d5848b4f11b46e9e0e346e6866c4" Feb 27 17:46:18 crc kubenswrapper[4708]: E0227 17:46:18.987568 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8ea6f93e9f355367ae0378e175554f44e47d5848b4f11b46e9e0e346e6866c4\": container with ID starting with d8ea6f93e9f355367ae0378e175554f44e47d5848b4f11b46e9e0e346e6866c4 not found: ID does not exist" containerID="d8ea6f93e9f355367ae0378e175554f44e47d5848b4f11b46e9e0e346e6866c4" Feb 27 17:46:18 crc kubenswrapper[4708]: I0227 17:46:18.987616 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8ea6f93e9f355367ae0378e175554f44e47d5848b4f11b46e9e0e346e6866c4"} err="failed to get container status \"d8ea6f93e9f355367ae0378e175554f44e47d5848b4f11b46e9e0e346e6866c4\": rpc error: code = NotFound desc = could not find container \"d8ea6f93e9f355367ae0378e175554f44e47d5848b4f11b46e9e0e346e6866c4\": container with ID starting with d8ea6f93e9f355367ae0378e175554f44e47d5848b4f11b46e9e0e346e6866c4 not found: ID does not exist" Feb 27 17:46:20 crc kubenswrapper[4708]: I0227 17:46:20.238834 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98f5575d-6149-4315-9fb5-e8f0cfc2bc37" path="/var/lib/kubelet/pods/98f5575d-6149-4315-9fb5-e8f0cfc2bc37/volumes" Feb 27 17:46:29 crc kubenswrapper[4708]: I0227 17:46:29.230589 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:46:29 crc kubenswrapper[4708]: E0227 17:46:29.231334 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:46:32 crc kubenswrapper[4708]: E0227 17:46:32.257236 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-9jvtx" podUID="41b12347-557a-489b-afaf-41c6903fa9c7" Feb 27 17:46:40 crc kubenswrapper[4708]: I0227 17:46:40.229395 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:46:40 crc kubenswrapper[4708]: E0227 17:46:40.230588 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:46:49 crc kubenswrapper[4708]: I0227 17:46:49.226921 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jvtx" event={"ID":"41b12347-557a-489b-afaf-41c6903fa9c7","Type":"ContainerStarted","Data":"a3b3bc275fdc1138a72779aabe8cb427d7aa76e00532972d7297e971b39eb4a1"} Feb 27 17:46:50 crc kubenswrapper[4708]: I0227 17:46:50.237291 4708 generic.go:334] "Generic (PLEG): container finished" podID="41b12347-557a-489b-afaf-41c6903fa9c7" containerID="a3b3bc275fdc1138a72779aabe8cb427d7aa76e00532972d7297e971b39eb4a1" exitCode=0 Feb 27 17:46:50 crc kubenswrapper[4708]: I0227 17:46:50.242075 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jvtx" event={"ID":"41b12347-557a-489b-afaf-41c6903fa9c7","Type":"ContainerDied","Data":"a3b3bc275fdc1138a72779aabe8cb427d7aa76e00532972d7297e971b39eb4a1"} Feb 27 17:46:51 crc kubenswrapper[4708]: I0227 17:46:51.229548 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:46:51 crc kubenswrapper[4708]: E0227 17:46:51.230586 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:46:51 crc kubenswrapper[4708]: I0227 17:46:51.252560 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jvtx" event={"ID":"41b12347-557a-489b-afaf-41c6903fa9c7","Type":"ContainerStarted","Data":"46931b7077af5c4c8c366671e5e524c89778eecf2f6e5f6f187a74a7b86a2aa7"} Feb 27 17:46:51 crc kubenswrapper[4708]: I0227 17:46:51.283219 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9jvtx" podStartSLOduration=2.115001889 podStartE2EDuration="51.283193288s" podCreationTimestamp="2026-02-27 17:46:00 +0000 UTC" firstStartedPulling="2026-02-27 17:46:01.610676254 +0000 UTC m=+3160.126473841" lastFinishedPulling="2026-02-27 17:46:50.778867613 +0000 UTC m=+3209.294665240" observedRunningTime="2026-02-27 17:46:51.274386361 +0000 UTC m=+3209.790183968" watchObservedRunningTime="2026-02-27 17:46:51.283193288 +0000 UTC m=+3209.798990895" Feb 27 17:47:00 crc kubenswrapper[4708]: I0227 17:47:00.788562 4708 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:47:00 crc kubenswrapper[4708]: I0227 17:47:00.789311 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:47:00 crc kubenswrapper[4708]: I0227 17:47:00.882723 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:47:01 crc kubenswrapper[4708]: I0227 17:47:01.419602 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:47:01 crc kubenswrapper[4708]: I0227 17:47:01.606570 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9jvtx"] Feb 27 17:47:02 crc kubenswrapper[4708]: I0227 17:47:02.433933 4708 scope.go:117] "RemoveContainer" containerID="2aefdfb9b9b29023d2382c8a46d4050e7b4cffe7a035ff170053619fe05c5487" Feb 27 17:47:02 crc kubenswrapper[4708]: I0227 17:47:02.493239 4708 scope.go:117] "RemoveContainer" containerID="b2ae793589e60d534400897341bc947b9a4e6b2c02b79e64f91bcb58b1ea7f9d" Feb 27 17:47:03 crc kubenswrapper[4708]: I0227 17:47:03.387339 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9jvtx" podUID="41b12347-557a-489b-afaf-41c6903fa9c7" containerName="registry-server" containerID="cri-o://46931b7077af5c4c8c366671e5e524c89778eecf2f6e5f6f187a74a7b86a2aa7" gracePeriod=2 Feb 27 17:47:03 crc kubenswrapper[4708]: I0227 17:47:03.934202 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.125908 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4tn5\" (UniqueName: \"kubernetes.io/projected/41b12347-557a-489b-afaf-41c6903fa9c7-kube-api-access-j4tn5\") pod \"41b12347-557a-489b-afaf-41c6903fa9c7\" (UID: \"41b12347-557a-489b-afaf-41c6903fa9c7\") " Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.126010 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b12347-557a-489b-afaf-41c6903fa9c7-catalog-content\") pod \"41b12347-557a-489b-afaf-41c6903fa9c7\" (UID: \"41b12347-557a-489b-afaf-41c6903fa9c7\") " Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.126172 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b12347-557a-489b-afaf-41c6903fa9c7-utilities\") pod \"41b12347-557a-489b-afaf-41c6903fa9c7\" (UID: \"41b12347-557a-489b-afaf-41c6903fa9c7\") " Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.127076 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41b12347-557a-489b-afaf-41c6903fa9c7-utilities" (OuterVolumeSpecName: "utilities") pod "41b12347-557a-489b-afaf-41c6903fa9c7" (UID: "41b12347-557a-489b-afaf-41c6903fa9c7"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.135207 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41b12347-557a-489b-afaf-41c6903fa9c7-kube-api-access-j4tn5" (OuterVolumeSpecName: "kube-api-access-j4tn5") pod "41b12347-557a-489b-afaf-41c6903fa9c7" (UID: "41b12347-557a-489b-afaf-41c6903fa9c7"). InnerVolumeSpecName "kube-api-access-j4tn5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.169345 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/41b12347-557a-489b-afaf-41c6903fa9c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "41b12347-557a-489b-afaf-41c6903fa9c7" (UID: "41b12347-557a-489b-afaf-41c6903fa9c7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.228407 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4tn5\" (UniqueName: \"kubernetes.io/projected/41b12347-557a-489b-afaf-41c6903fa9c7-kube-api-access-j4tn5\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.228668 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.228721 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/41b12347-557a-489b-afaf-41c6903fa9c7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.228905 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/41b12347-557a-489b-afaf-41c6903fa9c7-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:47:04 crc kubenswrapper[4708]: E0227 17:47:04.229088 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.401718 4708 generic.go:334] "Generic (PLEG): container finished" podID="41b12347-557a-489b-afaf-41c6903fa9c7" containerID="46931b7077af5c4c8c366671e5e524c89778eecf2f6e5f6f187a74a7b86a2aa7" exitCode=0 Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.401783 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jvtx" event={"ID":"41b12347-557a-489b-afaf-41c6903fa9c7","Type":"ContainerDied","Data":"46931b7077af5c4c8c366671e5e524c89778eecf2f6e5f6f187a74a7b86a2aa7"} Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.401805 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9jvtx" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.401906 4708 scope.go:117] "RemoveContainer" containerID="46931b7077af5c4c8c366671e5e524c89778eecf2f6e5f6f187a74a7b86a2aa7" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.401885 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9jvtx" event={"ID":"41b12347-557a-489b-afaf-41c6903fa9c7","Type":"ContainerDied","Data":"5641bdf144889404a9ac4f197f55933c21fcda7b7fb3e1bb2fea2446ea64ad51"} Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.430542 4708 scope.go:117] "RemoveContainer" containerID="a3b3bc275fdc1138a72779aabe8cb427d7aa76e00532972d7297e971b39eb4a1" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.441273 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9jvtx"] Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.458263 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9jvtx"] Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.459386 4708 scope.go:117] "RemoveContainer" containerID="6e0de3751a36b18bea8543e50031c68f71dcde838ea513d04741a894184c30f2" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.508119 4708 scope.go:117] "RemoveContainer" containerID="46931b7077af5c4c8c366671e5e524c89778eecf2f6e5f6f187a74a7b86a2aa7" Feb 27 17:47:04 crc kubenswrapper[4708]: E0227 17:47:04.508606 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46931b7077af5c4c8c366671e5e524c89778eecf2f6e5f6f187a74a7b86a2aa7\": container with ID starting with 46931b7077af5c4c8c366671e5e524c89778eecf2f6e5f6f187a74a7b86a2aa7 not found: ID does not exist" containerID="46931b7077af5c4c8c366671e5e524c89778eecf2f6e5f6f187a74a7b86a2aa7" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.508645 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46931b7077af5c4c8c366671e5e524c89778eecf2f6e5f6f187a74a7b86a2aa7"} err="failed to get container status \"46931b7077af5c4c8c366671e5e524c89778eecf2f6e5f6f187a74a7b86a2aa7\": rpc error: code = NotFound desc = could not find container \"46931b7077af5c4c8c366671e5e524c89778eecf2f6e5f6f187a74a7b86a2aa7\": container with ID starting with 46931b7077af5c4c8c366671e5e524c89778eecf2f6e5f6f187a74a7b86a2aa7 not found: ID does not exist" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.508672 4708 scope.go:117] "RemoveContainer" containerID="a3b3bc275fdc1138a72779aabe8cb427d7aa76e00532972d7297e971b39eb4a1" Feb 27 17:47:04 crc kubenswrapper[4708]: E0227 17:47:04.509001 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3b3bc275fdc1138a72779aabe8cb427d7aa76e00532972d7297e971b39eb4a1\": container with ID starting with a3b3bc275fdc1138a72779aabe8cb427d7aa76e00532972d7297e971b39eb4a1 not found: ID does not exist" containerID="a3b3bc275fdc1138a72779aabe8cb427d7aa76e00532972d7297e971b39eb4a1" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.509019 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3b3bc275fdc1138a72779aabe8cb427d7aa76e00532972d7297e971b39eb4a1"} err="failed to get container status \"a3b3bc275fdc1138a72779aabe8cb427d7aa76e00532972d7297e971b39eb4a1\": rpc error: code = NotFound desc = could not find 
container \"a3b3bc275fdc1138a72779aabe8cb427d7aa76e00532972d7297e971b39eb4a1\": container with ID starting with a3b3bc275fdc1138a72779aabe8cb427d7aa76e00532972d7297e971b39eb4a1 not found: ID does not exist" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.509034 4708 scope.go:117] "RemoveContainer" containerID="6e0de3751a36b18bea8543e50031c68f71dcde838ea513d04741a894184c30f2" Feb 27 17:47:04 crc kubenswrapper[4708]: E0227 17:47:04.509238 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e0de3751a36b18bea8543e50031c68f71dcde838ea513d04741a894184c30f2\": container with ID starting with 6e0de3751a36b18bea8543e50031c68f71dcde838ea513d04741a894184c30f2 not found: ID does not exist" containerID="6e0de3751a36b18bea8543e50031c68f71dcde838ea513d04741a894184c30f2" Feb 27 17:47:04 crc kubenswrapper[4708]: I0227 17:47:04.509288 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e0de3751a36b18bea8543e50031c68f71dcde838ea513d04741a894184c30f2"} err="failed to get container status \"6e0de3751a36b18bea8543e50031c68f71dcde838ea513d04741a894184c30f2\": rpc error: code = NotFound desc = could not find container \"6e0de3751a36b18bea8543e50031c68f71dcde838ea513d04741a894184c30f2\": container with ID starting with 6e0de3751a36b18bea8543e50031c68f71dcde838ea513d04741a894184c30f2 not found: ID does not exist" Feb 27 17:47:06 crc kubenswrapper[4708]: I0227 17:47:06.249142 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41b12347-557a-489b-afaf-41c6903fa9c7" path="/var/lib/kubelet/pods/41b12347-557a-489b-afaf-41c6903fa9c7/volumes" Feb 27 17:47:15 crc kubenswrapper[4708]: I0227 17:47:15.229938 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:47:15 crc kubenswrapper[4708]: E0227 17:47:15.231162 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:47:26 crc kubenswrapper[4708]: I0227 17:47:26.230600 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:47:26 crc kubenswrapper[4708]: E0227 17:47:26.231686 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:47:39 crc kubenswrapper[4708]: I0227 17:47:39.228643 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:47:39 crc kubenswrapper[4708]: E0227 17:47:39.229398 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:47:51 crc kubenswrapper[4708]: I0227 17:47:51.228731 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:47:51 crc kubenswrapper[4708]: E0227 17:47:51.229497 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.200933 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536908-zq845"] Feb 27 17:48:00 crc kubenswrapper[4708]: E0227 17:48:00.201923 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b12347-557a-489b-afaf-41c6903fa9c7" containerName="registry-server" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.201941 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b12347-557a-489b-afaf-41c6903fa9c7" containerName="registry-server" Feb 27 17:48:00 crc kubenswrapper[4708]: E0227 17:48:00.201953 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f5575d-6149-4315-9fb5-e8f0cfc2bc37" containerName="registry-server" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.201970 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f5575d-6149-4315-9fb5-e8f0cfc2bc37" containerName="registry-server" Feb 27 17:48:00 crc kubenswrapper[4708]: E0227 17:48:00.202000 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b12347-557a-489b-afaf-41c6903fa9c7" containerName="extract-utilities" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.202008 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b12347-557a-489b-afaf-41c6903fa9c7" containerName="extract-utilities" Feb 27 17:48:00 crc kubenswrapper[4708]: E0227 17:48:00.202025 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e26151a-c427-4ef6-b466-42266139ce98" containerName="oc" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.202031 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e26151a-c427-4ef6-b466-42266139ce98" containerName="oc" Feb 27 17:48:00 crc kubenswrapper[4708]: E0227 17:48:00.202045 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98f5575d-6149-4315-9fb5-e8f0cfc2bc37" containerName="extract-utilities" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.202050 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f5575d-6149-4315-9fb5-e8f0cfc2bc37" containerName="extract-utilities" Feb 27 17:48:00 crc kubenswrapper[4708]: E0227 17:48:00.202060 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41b12347-557a-489b-afaf-41c6903fa9c7" containerName="extract-content" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.202066 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="41b12347-557a-489b-afaf-41c6903fa9c7" containerName="extract-content" Feb 27 17:48:00 crc kubenswrapper[4708]: E0227 17:48:00.202082 4708 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="98f5575d-6149-4315-9fb5-e8f0cfc2bc37" containerName="extract-content" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.202088 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="98f5575d-6149-4315-9fb5-e8f0cfc2bc37" containerName="extract-content" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.202271 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e26151a-c427-4ef6-b466-42266139ce98" containerName="oc" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.202284 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="41b12347-557a-489b-afaf-41c6903fa9c7" containerName="registry-server" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.202301 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="98f5575d-6149-4315-9fb5-e8f0cfc2bc37" containerName="registry-server" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.203118 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536908-zq845" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.205496 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.205786 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.205872 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.214679 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536908-zq845"] Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.373158 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgsr9\" (UniqueName: \"kubernetes.io/projected/5eef5971-bdc4-487b-badd-bdc41823889f-kube-api-access-bgsr9\") pod \"auto-csr-approver-29536908-zq845\" (UID: \"5eef5971-bdc4-487b-badd-bdc41823889f\") " pod="openshift-infra/auto-csr-approver-29536908-zq845" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.475248 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgsr9\" (UniqueName: \"kubernetes.io/projected/5eef5971-bdc4-487b-badd-bdc41823889f-kube-api-access-bgsr9\") pod \"auto-csr-approver-29536908-zq845\" (UID: \"5eef5971-bdc4-487b-badd-bdc41823889f\") " pod="openshift-infra/auto-csr-approver-29536908-zq845" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.495673 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgsr9\" (UniqueName: \"kubernetes.io/projected/5eef5971-bdc4-487b-badd-bdc41823889f-kube-api-access-bgsr9\") pod \"auto-csr-approver-29536908-zq845\" (UID: \"5eef5971-bdc4-487b-badd-bdc41823889f\") " pod="openshift-infra/auto-csr-approver-29536908-zq845" Feb 27 17:48:00 crc kubenswrapper[4708]: I0227 17:48:00.535305 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536908-zq845" Feb 27 17:48:01 crc kubenswrapper[4708]: I0227 17:48:01.040867 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536908-zq845"] Feb 27 17:48:01 crc kubenswrapper[4708]: I0227 17:48:01.080497 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536908-zq845" event={"ID":"5eef5971-bdc4-487b-badd-bdc41823889f","Type":"ContainerStarted","Data":"c3befa7e86d62e0ad9a1a4d0ca3d7292fb933a6bfcb44a3f20ecf64d2f7cb3d1"} Feb 27 17:48:02 crc kubenswrapper[4708]: E0227 17:48:02.017939 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:48:02 crc kubenswrapper[4708]: E0227 17:48:02.018249 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:48:02 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:48:02 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgsr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536908-zq845_openshift-infra(5eef5971-bdc4-487b-badd-bdc41823889f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:48:02 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:48:02 crc kubenswrapper[4708]: E0227 17:48:02.019605 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536908-zq845" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" Feb 27 17:48:02 crc kubenswrapper[4708]: E0227 17:48:02.090077 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536908-zq845" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" Feb 27 17:48:03 crc kubenswrapper[4708]: I0227 17:48:03.228918 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:48:03 crc kubenswrapper[4708]: E0227 17:48:03.230343 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:48:14 crc kubenswrapper[4708]: I0227 17:48:14.228605 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:48:14 crc kubenswrapper[4708]: E0227 17:48:14.229636 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:48:17 crc kubenswrapper[4708]: E0227 17:48:17.407429 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:48:17 crc kubenswrapper[4708]: E0227 17:48:17.407910 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:48:17 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:48:17 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgsr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536908-zq845_openshift-infra(5eef5971-bdc4-487b-badd-bdc41823889f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:48:17 crc kubenswrapper[4708]: > logger="UnhandledError" 
Feb 27 17:48:17 crc kubenswrapper[4708]: E0227 17:48:17.409632 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536908-zq845" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" Feb 27 17:48:27 crc kubenswrapper[4708]: I0227 17:48:27.228805 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:48:27 crc kubenswrapper[4708]: E0227 17:48:27.229651 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:48:29 crc kubenswrapper[4708]: E0227 17:48:29.231096 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536908-zq845" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" Feb 27 17:48:40 crc kubenswrapper[4708]: I0227 17:48:40.229900 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:48:41 crc kubenswrapper[4708]: I0227 17:48:41.515513 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"63e575a8e32ef4c90c85e76af9a3f5d1acc3ab5df9c8fc9bf2827fca736a9ed5"} Feb 27 17:48:42 crc kubenswrapper[4708]: E0227 17:48:42.153812 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:48:42 crc kubenswrapper[4708]: E0227 17:48:42.154211 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:48:42 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:48:42 crc kubenswrapper[4708]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgsr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536908-zq845_openshift-infra(5eef5971-bdc4-487b-badd-bdc41823889f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:48:42 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:48:42 crc kubenswrapper[4708]: E0227 17:48:42.155923 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536908-zq845" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" Feb 27 17:48:56 crc kubenswrapper[4708]: E0227 17:48:56.230448 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536908-zq845" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" Feb 27 17:49:08 crc kubenswrapper[4708]: E0227 17:49:08.233253 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536908-zq845" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" Feb 27 17:50:00 crc kubenswrapper[4708]: I0227 17:50:00.148634 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536910-gxxhz"] Feb 27 17:50:00 crc kubenswrapper[4708]: I0227 17:50:00.150822 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536910-gxxhz" Feb 27 17:50:00 crc kubenswrapper[4708]: I0227 17:50:00.162830 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536910-gxxhz"] Feb 27 17:50:00 crc kubenswrapper[4708]: I0227 17:50:00.246336 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs68v\" (UniqueName: \"kubernetes.io/projected/6d203876-ad77-46ba-a151-e2af5363930c-kube-api-access-cs68v\") pod \"auto-csr-approver-29536910-gxxhz\" (UID: \"6d203876-ad77-46ba-a151-e2af5363930c\") " pod="openshift-infra/auto-csr-approver-29536910-gxxhz" Feb 27 17:50:00 crc kubenswrapper[4708]: I0227 17:50:00.348531 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs68v\" (UniqueName: \"kubernetes.io/projected/6d203876-ad77-46ba-a151-e2af5363930c-kube-api-access-cs68v\") pod \"auto-csr-approver-29536910-gxxhz\" (UID: \"6d203876-ad77-46ba-a151-e2af5363930c\") " pod="openshift-infra/auto-csr-approver-29536910-gxxhz" Feb 27 17:50:00 crc kubenswrapper[4708]: I0227 17:50:00.368995 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs68v\" (UniqueName: \"kubernetes.io/projected/6d203876-ad77-46ba-a151-e2af5363930c-kube-api-access-cs68v\") pod \"auto-csr-approver-29536910-gxxhz\" (UID: \"6d203876-ad77-46ba-a151-e2af5363930c\") " pod="openshift-infra/auto-csr-approver-29536910-gxxhz" Feb 27 17:50:00 crc kubenswrapper[4708]: I0227 17:50:00.481693 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536910-gxxhz" Feb 27 17:50:01 crc kubenswrapper[4708]: I0227 17:50:01.070958 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536910-gxxhz"] Feb 27 17:50:01 crc kubenswrapper[4708]: I0227 17:50:01.371888 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536910-gxxhz" event={"ID":"6d203876-ad77-46ba-a151-e2af5363930c","Type":"ContainerStarted","Data":"9d58908eda52c06b596f8c1304ddfae5a95cefa91e74f2c43d015b6d829c162a"} Feb 27 17:50:03 crc kubenswrapper[4708]: I0227 17:50:03.403170 4708 generic.go:334] "Generic (PLEG): container finished" podID="6d203876-ad77-46ba-a151-e2af5363930c" containerID="1ec2352169dc1c3da7f06823b726774e3559e155036a7b581147c01ce5bc1803" exitCode=0 Feb 27 17:50:03 crc kubenswrapper[4708]: I0227 17:50:03.403230 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536910-gxxhz" event={"ID":"6d203876-ad77-46ba-a151-e2af5363930c","Type":"ContainerDied","Data":"1ec2352169dc1c3da7f06823b726774e3559e155036a7b581147c01ce5bc1803"} Feb 27 17:50:04 crc kubenswrapper[4708]: I0227 17:50:04.900576 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536910-gxxhz" Feb 27 17:50:04 crc kubenswrapper[4708]: I0227 17:50:04.946514 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs68v\" (UniqueName: \"kubernetes.io/projected/6d203876-ad77-46ba-a151-e2af5363930c-kube-api-access-cs68v\") pod \"6d203876-ad77-46ba-a151-e2af5363930c\" (UID: \"6d203876-ad77-46ba-a151-e2af5363930c\") " Feb 27 17:50:04 crc kubenswrapper[4708]: I0227 17:50:04.954884 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d203876-ad77-46ba-a151-e2af5363930c-kube-api-access-cs68v" (OuterVolumeSpecName: "kube-api-access-cs68v") pod "6d203876-ad77-46ba-a151-e2af5363930c" (UID: "6d203876-ad77-46ba-a151-e2af5363930c"). InnerVolumeSpecName "kube-api-access-cs68v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:50:05 crc kubenswrapper[4708]: I0227 17:50:05.049022 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs68v\" (UniqueName: \"kubernetes.io/projected/6d203876-ad77-46ba-a151-e2af5363930c-kube-api-access-cs68v\") on node \"crc\" DevicePath \"\"" Feb 27 17:50:05 crc kubenswrapper[4708]: I0227 17:50:05.431030 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536910-gxxhz" event={"ID":"6d203876-ad77-46ba-a151-e2af5363930c","Type":"ContainerDied","Data":"9d58908eda52c06b596f8c1304ddfae5a95cefa91e74f2c43d015b6d829c162a"} Feb 27 17:50:05 crc kubenswrapper[4708]: I0227 17:50:05.431388 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d58908eda52c06b596f8c1304ddfae5a95cefa91e74f2c43d015b6d829c162a" Feb 27 17:50:05 crc kubenswrapper[4708]: I0227 17:50:05.431105 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536910-gxxhz" Feb 27 17:50:05 crc kubenswrapper[4708]: I0227 17:50:05.986982 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536902-mlgds"] Feb 27 17:50:06 crc kubenswrapper[4708]: I0227 17:50:06.004221 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536902-mlgds"] Feb 27 17:50:06 crc kubenswrapper[4708]: I0227 17:50:06.248341 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6db7652-4a2c-4ae5-9431-ffde3373ae3f" path="/var/lib/kubelet/pods/d6db7652-4a2c-4ae5-9431-ffde3373ae3f/volumes" Feb 27 17:50:14 crc kubenswrapper[4708]: E0227 17:50:14.752379 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:50:14 crc kubenswrapper[4708]: E0227 17:50:14.752838 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:50:14 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:50:14 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-bgsr9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536908-zq845_openshift-infra(5eef5971-bdc4-487b-badd-bdc41823889f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:50:14 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:50:14 crc kubenswrapper[4708]: E0227 17:50:14.754053 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536908-zq845" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" Feb 27 17:50:27 crc kubenswrapper[4708]: E0227 17:50:27.232115 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling 
image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536908-zq845" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" Feb 27 17:50:40 crc kubenswrapper[4708]: E0227 17:50:40.230814 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536908-zq845" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.440008 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-tql2j"] Feb 27 17:50:44 crc kubenswrapper[4708]: E0227 17:50:44.441199 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d203876-ad77-46ba-a151-e2af5363930c" containerName="oc" Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.441221 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d203876-ad77-46ba-a151-e2af5363930c" containerName="oc" Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.441625 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d203876-ad77-46ba-a151-e2af5363930c" containerName="oc" Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.444282 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.461613 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tql2j"] Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.531065 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2t7f2\" (UniqueName: \"kubernetes.io/projected/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-kube-api-access-2t7f2\") pod \"redhat-operators-tql2j\" (UID: \"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\") " pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.531206 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-catalog-content\") pod \"redhat-operators-tql2j\" (UID: \"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\") " pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.531262 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-utilities\") pod \"redhat-operators-tql2j\" (UID: \"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\") " pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.633823 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2t7f2\" (UniqueName: \"kubernetes.io/projected/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-kube-api-access-2t7f2\") pod \"redhat-operators-tql2j\" (UID: \"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\") " pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.634047 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-catalog-content\") pod \"redhat-operators-tql2j\" (UID: 
\"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\") " pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.634130 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-utilities\") pod \"redhat-operators-tql2j\" (UID: \"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\") " pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.634723 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-catalog-content\") pod \"redhat-operators-tql2j\" (UID: \"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\") " pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.635121 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-utilities\") pod \"redhat-operators-tql2j\" (UID: \"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\") " pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.657752 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2t7f2\" (UniqueName: \"kubernetes.io/projected/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-kube-api-access-2t7f2\") pod \"redhat-operators-tql2j\" (UID: \"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\") " pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:50:44 crc kubenswrapper[4708]: I0227 17:50:44.770559 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:50:45 crc kubenswrapper[4708]: I0227 17:50:45.237004 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-tql2j"] Feb 27 17:50:45 crc kubenswrapper[4708]: W0227 17:50:45.248049 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88f9d3ed_ac7f_45d8_b5a7_4793d8b8cf8d.slice/crio-d41ba4579745182b82d83cc992389cd825b69dfb94acba69f64d57aa5a4ecc34 WatchSource:0}: Error finding container d41ba4579745182b82d83cc992389cd825b69dfb94acba69f64d57aa5a4ecc34: Status 404 returned error can't find the container with id d41ba4579745182b82d83cc992389cd825b69dfb94acba69f64d57aa5a4ecc34 Feb 27 17:50:45 crc kubenswrapper[4708]: I0227 17:50:45.931310 4708 generic.go:334] "Generic (PLEG): container finished" podID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" containerID="9bcf399ba11e0abc28227b4239478d6c943edcd7f0f5049351526d0e435845b1" exitCode=0 Feb 27 17:50:45 crc kubenswrapper[4708]: I0227 17:50:45.931361 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tql2j" event={"ID":"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d","Type":"ContainerDied","Data":"9bcf399ba11e0abc28227b4239478d6c943edcd7f0f5049351526d0e435845b1"} Feb 27 17:50:45 crc kubenswrapper[4708]: I0227 17:50:45.932043 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tql2j" event={"ID":"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d","Type":"ContainerStarted","Data":"d41ba4579745182b82d83cc992389cd825b69dfb94acba69f64d57aa5a4ecc34"} Feb 27 17:50:45 crc kubenswrapper[4708]: I0227 17:50:45.934069 4708 provider.go:102] Refreshing cache for provider: 
*credentialprovider.defaultDockerConfigProvider Feb 27 17:50:46 crc kubenswrapper[4708]: E0227 17:50:46.610081 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 17:50:46 crc kubenswrapper[4708]: E0227 17:50:46.610489 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t7f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tql2j_openshift-marketplace(88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:50:46 crc kubenswrapper[4708]: E0227 17:50:46.611645 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-tql2j" podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" Feb 27 17:50:46 crc kubenswrapper[4708]: E0227 17:50:46.943256 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tql2j" 
podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" Feb 27 17:50:55 crc kubenswrapper[4708]: E0227 17:50:55.231594 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536908-zq845" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" Feb 27 17:51:02 crc kubenswrapper[4708]: I0227 17:51:02.707675 4708 scope.go:117] "RemoveContainer" containerID="36261dd96463185d9e900b72935ab212f2c797ee175368fe4afc0d52f009b1e3" Feb 27 17:51:05 crc kubenswrapper[4708]: I0227 17:51:05.631176 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:51:05 crc kubenswrapper[4708]: I0227 17:51:05.631492 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:51:09 crc kubenswrapper[4708]: E0227 17:51:09.232201 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536908-zq845" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" Feb 27 17:51:22 crc kubenswrapper[4708]: E0227 17:51:22.245671 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536908-zq845" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" Feb 27 17:51:32 crc kubenswrapper[4708]: E0227 17:51:32.012790 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 17:51:32 crc kubenswrapper[4708]: E0227 17:51:32.013512 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t7f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tql2j_openshift-marketplace(88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:51:32 crc kubenswrapper[4708]: E0227 17:51:32.014921 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-tql2j" podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" Feb 27 17:51:33 crc kubenswrapper[4708]: E0227 17:51:33.231277 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536908-zq845" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" Feb 27 17:51:35 crc kubenswrapper[4708]: I0227 17:51:35.631764 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:51:35 crc kubenswrapper[4708]: I0227 17:51:35.632412 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:51:45 crc kubenswrapper[4708]: E0227 17:51:45.231612 4708 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tql2j" podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" Feb 27 17:51:47 crc kubenswrapper[4708]: I0227 17:51:47.590728 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536908-zq845" event={"ID":"5eef5971-bdc4-487b-badd-bdc41823889f","Type":"ContainerStarted","Data":"1edc4084819b9eb16d611f08e6981dc85027a46adb7eb874e493475693498c2e"} Feb 27 17:51:47 crc kubenswrapper[4708]: I0227 17:51:47.614859 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536908-zq845" podStartSLOduration=1.552667498 podStartE2EDuration="3m47.614827622s" podCreationTimestamp="2026-02-27 17:48:00 +0000 UTC" firstStartedPulling="2026-02-27 17:48:01.0375463 +0000 UTC m=+3279.553343887" lastFinishedPulling="2026-02-27 17:51:47.099706424 +0000 UTC m=+3505.615504011" observedRunningTime="2026-02-27 17:51:47.608489804 +0000 UTC m=+3506.124287431" watchObservedRunningTime="2026-02-27 17:51:47.614827622 +0000 UTC m=+3506.130625239" Feb 27 17:51:48 crc kubenswrapper[4708]: I0227 17:51:48.607157 4708 generic.go:334] "Generic (PLEG): container finished" podID="5eef5971-bdc4-487b-badd-bdc41823889f" containerID="1edc4084819b9eb16d611f08e6981dc85027a46adb7eb874e493475693498c2e" exitCode=0 Feb 27 17:51:48 crc kubenswrapper[4708]: I0227 17:51:48.607303 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536908-zq845" event={"ID":"5eef5971-bdc4-487b-badd-bdc41823889f","Type":"ContainerDied","Data":"1edc4084819b9eb16d611f08e6981dc85027a46adb7eb874e493475693498c2e"} Feb 27 17:51:50 crc kubenswrapper[4708]: I0227 17:51:50.055934 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536908-zq845" Feb 27 17:51:50 crc kubenswrapper[4708]: I0227 17:51:50.139876 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgsr9\" (UniqueName: \"kubernetes.io/projected/5eef5971-bdc4-487b-badd-bdc41823889f-kube-api-access-bgsr9\") pod \"5eef5971-bdc4-487b-badd-bdc41823889f\" (UID: \"5eef5971-bdc4-487b-badd-bdc41823889f\") " Feb 27 17:51:50 crc kubenswrapper[4708]: I0227 17:51:50.161827 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5eef5971-bdc4-487b-badd-bdc41823889f-kube-api-access-bgsr9" (OuterVolumeSpecName: "kube-api-access-bgsr9") pod "5eef5971-bdc4-487b-badd-bdc41823889f" (UID: "5eef5971-bdc4-487b-badd-bdc41823889f"). InnerVolumeSpecName "kube-api-access-bgsr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:51:50 crc kubenswrapper[4708]: I0227 17:51:50.243154 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgsr9\" (UniqueName: \"kubernetes.io/projected/5eef5971-bdc4-487b-badd-bdc41823889f-kube-api-access-bgsr9\") on node \"crc\" DevicePath \"\"" Feb 27 17:51:50 crc kubenswrapper[4708]: I0227 17:51:50.648794 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536908-zq845" event={"ID":"5eef5971-bdc4-487b-badd-bdc41823889f","Type":"ContainerDied","Data":"c3befa7e86d62e0ad9a1a4d0ca3d7292fb933a6bfcb44a3f20ecf64d2f7cb3d1"} Feb 27 17:51:50 crc kubenswrapper[4708]: I0227 17:51:50.648829 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3befa7e86d62e0ad9a1a4d0ca3d7292fb933a6bfcb44a3f20ecf64d2f7cb3d1" Feb 27 17:51:50 crc kubenswrapper[4708]: I0227 17:51:50.648893 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536908-zq845" Feb 27 17:51:50 crc kubenswrapper[4708]: I0227 17:51:50.679566 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536904-8bkhj"] Feb 27 17:51:50 crc kubenswrapper[4708]: I0227 17:51:50.692683 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536904-8bkhj"] Feb 27 17:51:52 crc kubenswrapper[4708]: I0227 17:51:52.244495 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1b82144-2072-420e-988a-bc5cea74f1ef" path="/var/lib/kubelet/pods/b1b82144-2072-420e-988a-bc5cea74f1ef/volumes" Feb 27 17:51:58 crc kubenswrapper[4708]: E0227 17:51:58.912902 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 17:51:58 crc kubenswrapper[4708]: E0227 17:51:58.913806 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2t7f2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-tql2j_openshift-marketplace(88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:51:58 crc kubenswrapper[4708]: E0227 17:51:58.915234 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-tql2j" podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" Feb 27 17:52:00 crc kubenswrapper[4708]: I0227 17:52:00.170873 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536912-fvtmd"] Feb 27 17:52:00 crc kubenswrapper[4708]: E0227 17:52:00.171691 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" containerName="oc" Feb 27 17:52:00 crc kubenswrapper[4708]: I0227 17:52:00.171718 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" containerName="oc" Feb 27 17:52:00 crc kubenswrapper[4708]: I0227 17:52:00.172236 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" containerName="oc" Feb 27 17:52:00 crc kubenswrapper[4708]: I0227 17:52:00.173581 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536912-fvtmd" Feb 27 17:52:00 crc kubenswrapper[4708]: I0227 17:52:00.176585 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:52:00 crc kubenswrapper[4708]: I0227 17:52:00.176980 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:52:00 crc kubenswrapper[4708]: I0227 17:52:00.179603 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:52:00 crc kubenswrapper[4708]: I0227 17:52:00.186972 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536912-fvtmd"] Feb 27 17:52:00 crc kubenswrapper[4708]: I0227 17:52:00.271005 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnw8b\" (UniqueName: \"kubernetes.io/projected/eb824baf-4fac-4634-85d1-d126cb326116-kube-api-access-hnw8b\") pod \"auto-csr-approver-29536912-fvtmd\" (UID: \"eb824baf-4fac-4634-85d1-d126cb326116\") " pod="openshift-infra/auto-csr-approver-29536912-fvtmd" Feb 27 17:52:00 crc kubenswrapper[4708]: I0227 17:52:00.374236 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnw8b\" (UniqueName: \"kubernetes.io/projected/eb824baf-4fac-4634-85d1-d126cb326116-kube-api-access-hnw8b\") pod \"auto-csr-approver-29536912-fvtmd\" (UID: \"eb824baf-4fac-4634-85d1-d126cb326116\") " pod="openshift-infra/auto-csr-approver-29536912-fvtmd" Feb 27 17:52:00 crc kubenswrapper[4708]: I0227 17:52:00.403609 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnw8b\" (UniqueName: \"kubernetes.io/projected/eb824baf-4fac-4634-85d1-d126cb326116-kube-api-access-hnw8b\") pod \"auto-csr-approver-29536912-fvtmd\" (UID: \"eb824baf-4fac-4634-85d1-d126cb326116\") " pod="openshift-infra/auto-csr-approver-29536912-fvtmd" Feb 27 17:52:00 crc kubenswrapper[4708]: I0227 17:52:00.512185 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536912-fvtmd" Feb 27 17:52:01 crc kubenswrapper[4708]: I0227 17:52:01.005134 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536912-fvtmd"] Feb 27 17:52:01 crc kubenswrapper[4708]: W0227 17:52:01.006889 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb824baf_4fac_4634_85d1_d126cb326116.slice/crio-74640a952056f95ef2236a27fc9a8aa1ce3ecfec796636ef92d6618bacbcd568 WatchSource:0}: Error finding container 74640a952056f95ef2236a27fc9a8aa1ce3ecfec796636ef92d6618bacbcd568: Status 404 returned error can't find the container with id 74640a952056f95ef2236a27fc9a8aa1ce3ecfec796636ef92d6618bacbcd568 Feb 27 17:52:01 crc kubenswrapper[4708]: I0227 17:52:01.782175 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536912-fvtmd" event={"ID":"eb824baf-4fac-4634-85d1-d126cb326116","Type":"ContainerStarted","Data":"74640a952056f95ef2236a27fc9a8aa1ce3ecfec796636ef92d6618bacbcd568"} Feb 27 17:52:02 crc kubenswrapper[4708]: I0227 17:52:02.793580 4708 generic.go:334] "Generic (PLEG): container finished" podID="eb824baf-4fac-4634-85d1-d126cb326116" containerID="28d0e25079553eacf5e2b7ab66a909e110b6de80827b7bf13e1405c8256528d8" exitCode=0 Feb 27 17:52:02 crc kubenswrapper[4708]: I0227 17:52:02.793653 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536912-fvtmd" event={"ID":"eb824baf-4fac-4634-85d1-d126cb326116","Type":"ContainerDied","Data":"28d0e25079553eacf5e2b7ab66a909e110b6de80827b7bf13e1405c8256528d8"} Feb 27 17:52:02 crc kubenswrapper[4708]: I0227 17:52:02.806322 4708 scope.go:117] "RemoveContainer" containerID="6c3c8db7ecf272f53b7e228182156a26205254347b1853e5f0dd80de23a59d90" Feb 27 17:52:04 crc kubenswrapper[4708]: I0227 17:52:04.324494 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536912-fvtmd" Feb 27 17:52:04 crc kubenswrapper[4708]: I0227 17:52:04.362590 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hnw8b\" (UniqueName: \"kubernetes.io/projected/eb824baf-4fac-4634-85d1-d126cb326116-kube-api-access-hnw8b\") pod \"eb824baf-4fac-4634-85d1-d126cb326116\" (UID: \"eb824baf-4fac-4634-85d1-d126cb326116\") " Feb 27 17:52:04 crc kubenswrapper[4708]: I0227 17:52:04.371776 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb824baf-4fac-4634-85d1-d126cb326116-kube-api-access-hnw8b" (OuterVolumeSpecName: "kube-api-access-hnw8b") pod "eb824baf-4fac-4634-85d1-d126cb326116" (UID: "eb824baf-4fac-4634-85d1-d126cb326116"). InnerVolumeSpecName "kube-api-access-hnw8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:52:04 crc kubenswrapper[4708]: I0227 17:52:04.465342 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hnw8b\" (UniqueName: \"kubernetes.io/projected/eb824baf-4fac-4634-85d1-d126cb326116-kube-api-access-hnw8b\") on node \"crc\" DevicePath \"\"" Feb 27 17:52:04 crc kubenswrapper[4708]: I0227 17:52:04.821390 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536912-fvtmd" event={"ID":"eb824baf-4fac-4634-85d1-d126cb326116","Type":"ContainerDied","Data":"74640a952056f95ef2236a27fc9a8aa1ce3ecfec796636ef92d6618bacbcd568"} Feb 27 17:52:04 crc kubenswrapper[4708]: I0227 17:52:04.821447 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74640a952056f95ef2236a27fc9a8aa1ce3ecfec796636ef92d6618bacbcd568" Feb 27 17:52:04 crc kubenswrapper[4708]: I0227 17:52:04.821507 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536912-fvtmd" Feb 27 17:52:05 crc kubenswrapper[4708]: I0227 17:52:05.404591 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536906-6hkdf"] Feb 27 17:52:05 crc kubenswrapper[4708]: I0227 17:52:05.419381 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536906-6hkdf"] Feb 27 17:52:05 crc kubenswrapper[4708]: I0227 17:52:05.632337 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:52:05 crc kubenswrapper[4708]: I0227 17:52:05.632756 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:52:05 crc kubenswrapper[4708]: I0227 17:52:05.632804 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 17:52:05 crc kubenswrapper[4708]: I0227 17:52:05.634426 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"63e575a8e32ef4c90c85e76af9a3f5d1acc3ab5df9c8fc9bf2827fca736a9ed5"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 17:52:05 crc kubenswrapper[4708]: I0227 17:52:05.634653 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://63e575a8e32ef4c90c85e76af9a3f5d1acc3ab5df9c8fc9bf2827fca736a9ed5" gracePeriod=600 Feb 27 17:52:05 crc kubenswrapper[4708]: I0227 17:52:05.837401 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="63e575a8e32ef4c90c85e76af9a3f5d1acc3ab5df9c8fc9bf2827fca736a9ed5" exitCode=0 Feb 27 17:52:05 crc kubenswrapper[4708]: I0227 17:52:05.837483 4708 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"63e575a8e32ef4c90c85e76af9a3f5d1acc3ab5df9c8fc9bf2827fca736a9ed5"} Feb 27 17:52:05 crc kubenswrapper[4708]: I0227 17:52:05.837534 4708 scope.go:117] "RemoveContainer" containerID="c65c1fea386c2f88aa207bfa86667eaf42d05716936e2ca5c32a3bc147271722" Feb 27 17:52:06 crc kubenswrapper[4708]: I0227 17:52:06.247639 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7e26151a-c427-4ef6-b466-42266139ce98" path="/var/lib/kubelet/pods/7e26151a-c427-4ef6-b466-42266139ce98/volumes" Feb 27 17:52:06 crc kubenswrapper[4708]: I0227 17:52:06.855215 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4"} Feb 27 17:52:14 crc kubenswrapper[4708]: E0227 17:52:14.233641 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tql2j" podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" Feb 27 17:52:27 crc kubenswrapper[4708]: E0227 17:52:27.230865 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-tql2j" podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" Feb 27 17:52:41 crc kubenswrapper[4708]: I0227 17:52:41.283354 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tql2j" event={"ID":"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d","Type":"ContainerStarted","Data":"bd34accd75dc0c80c33c3fcdbb7c9cd8d873f73f29ccc834be97ba7ffeb9ca38"} Feb 27 17:52:45 crc kubenswrapper[4708]: I0227 17:52:45.370742 4708 generic.go:334] "Generic (PLEG): container finished" podID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" containerID="bd34accd75dc0c80c33c3fcdbb7c9cd8d873f73f29ccc834be97ba7ffeb9ca38" exitCode=0 Feb 27 17:52:45 crc kubenswrapper[4708]: I0227 17:52:45.370831 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tql2j" event={"ID":"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d","Type":"ContainerDied","Data":"bd34accd75dc0c80c33c3fcdbb7c9cd8d873f73f29ccc834be97ba7ffeb9ca38"} Feb 27 17:52:46 crc kubenswrapper[4708]: I0227 17:52:46.388016 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tql2j" event={"ID":"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d","Type":"ContainerStarted","Data":"6bad4f1a07d954c93dc53d2702cc1b47f25569ac5c54660dcadc477b55dc556e"} Feb 27 17:52:46 crc kubenswrapper[4708]: I0227 17:52:46.414701 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-tql2j" podStartSLOduration=2.582802955 podStartE2EDuration="2m2.414682555s" podCreationTimestamp="2026-02-27 17:50:44 +0000 UTC" firstStartedPulling="2026-02-27 17:50:45.933772864 +0000 UTC m=+3444.449570451" lastFinishedPulling="2026-02-27 17:52:45.765652464 +0000 UTC m=+3564.281450051" observedRunningTime="2026-02-27 17:52:46.407649578 +0000 UTC 
m=+3564.923447205" watchObservedRunningTime="2026-02-27 17:52:46.414682555 +0000 UTC m=+3564.930480152" Feb 27 17:52:54 crc kubenswrapper[4708]: I0227 17:52:54.770921 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:52:54 crc kubenswrapper[4708]: I0227 17:52:54.771460 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:52:54 crc kubenswrapper[4708]: I0227 17:52:54.827253 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:52:55 crc kubenswrapper[4708]: I0227 17:52:55.569964 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:52:55 crc kubenswrapper[4708]: I0227 17:52:55.632797 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tql2j"] Feb 27 17:52:57 crc kubenswrapper[4708]: I0227 17:52:57.508719 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-tql2j" podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" containerName="registry-server" containerID="cri-o://6bad4f1a07d954c93dc53d2702cc1b47f25569ac5c54660dcadc477b55dc556e" gracePeriod=2 Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.106060 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.222777 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-catalog-content\") pod \"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\" (UID: \"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\") " Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.223021 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-utilities\") pod \"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\" (UID: \"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\") " Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.223090 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2t7f2\" (UniqueName: \"kubernetes.io/projected/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-kube-api-access-2t7f2\") pod \"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\" (UID: \"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d\") " Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.224654 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-utilities" (OuterVolumeSpecName: "utilities") pod "88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" (UID: "88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.234153 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-kube-api-access-2t7f2" (OuterVolumeSpecName: "kube-api-access-2t7f2") pod "88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" (UID: "88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d"). InnerVolumeSpecName "kube-api-access-2t7f2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.326248 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.326283 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2t7f2\" (UniqueName: \"kubernetes.io/projected/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-kube-api-access-2t7f2\") on node \"crc\" DevicePath \"\"" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.363945 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" (UID: "88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.430574 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.524276 4708 generic.go:334] "Generic (PLEG): container finished" podID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" containerID="6bad4f1a07d954c93dc53d2702cc1b47f25569ac5c54660dcadc477b55dc556e" exitCode=0 Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.524368 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tql2j" event={"ID":"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d","Type":"ContainerDied","Data":"6bad4f1a07d954c93dc53d2702cc1b47f25569ac5c54660dcadc477b55dc556e"} Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.524411 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-tql2j" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.524449 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-tql2j" event={"ID":"88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d","Type":"ContainerDied","Data":"d41ba4579745182b82d83cc992389cd825b69dfb94acba69f64d57aa5a4ecc34"} Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.524484 4708 scope.go:117] "RemoveContainer" containerID="6bad4f1a07d954c93dc53d2702cc1b47f25569ac5c54660dcadc477b55dc556e" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.565172 4708 scope.go:117] "RemoveContainer" containerID="bd34accd75dc0c80c33c3fcdbb7c9cd8d873f73f29ccc834be97ba7ffeb9ca38" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.572718 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-tql2j"] Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.613936 4708 scope.go:117] "RemoveContainer" containerID="9bcf399ba11e0abc28227b4239478d6c943edcd7f0f5049351526d0e435845b1" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.616988 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-tql2j"] Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.671816 4708 scope.go:117] "RemoveContainer" containerID="6bad4f1a07d954c93dc53d2702cc1b47f25569ac5c54660dcadc477b55dc556e" Feb 27 17:52:58 crc kubenswrapper[4708]: E0227 17:52:58.672916 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bad4f1a07d954c93dc53d2702cc1b47f25569ac5c54660dcadc477b55dc556e\": container with ID starting with 6bad4f1a07d954c93dc53d2702cc1b47f25569ac5c54660dcadc477b55dc556e not found: ID does not exist" containerID="6bad4f1a07d954c93dc53d2702cc1b47f25569ac5c54660dcadc477b55dc556e" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.672996 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bad4f1a07d954c93dc53d2702cc1b47f25569ac5c54660dcadc477b55dc556e"} err="failed to get container status \"6bad4f1a07d954c93dc53d2702cc1b47f25569ac5c54660dcadc477b55dc556e\": rpc error: code = NotFound desc = could not find container \"6bad4f1a07d954c93dc53d2702cc1b47f25569ac5c54660dcadc477b55dc556e\": container with ID starting with 6bad4f1a07d954c93dc53d2702cc1b47f25569ac5c54660dcadc477b55dc556e not found: ID does not exist" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.673041 4708 scope.go:117] "RemoveContainer" containerID="bd34accd75dc0c80c33c3fcdbb7c9cd8d873f73f29ccc834be97ba7ffeb9ca38" Feb 27 17:52:58 crc kubenswrapper[4708]: E0227 17:52:58.673690 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd34accd75dc0c80c33c3fcdbb7c9cd8d873f73f29ccc834be97ba7ffeb9ca38\": container with ID starting with bd34accd75dc0c80c33c3fcdbb7c9cd8d873f73f29ccc834be97ba7ffeb9ca38 not found: ID does not exist" containerID="bd34accd75dc0c80c33c3fcdbb7c9cd8d873f73f29ccc834be97ba7ffeb9ca38" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.673777 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd34accd75dc0c80c33c3fcdbb7c9cd8d873f73f29ccc834be97ba7ffeb9ca38"} err="failed to get container status \"bd34accd75dc0c80c33c3fcdbb7c9cd8d873f73f29ccc834be97ba7ffeb9ca38\": rpc error: code = NotFound desc = could not find container 
\"bd34accd75dc0c80c33c3fcdbb7c9cd8d873f73f29ccc834be97ba7ffeb9ca38\": container with ID starting with bd34accd75dc0c80c33c3fcdbb7c9cd8d873f73f29ccc834be97ba7ffeb9ca38 not found: ID does not exist" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.673825 4708 scope.go:117] "RemoveContainer" containerID="9bcf399ba11e0abc28227b4239478d6c943edcd7f0f5049351526d0e435845b1" Feb 27 17:52:58 crc kubenswrapper[4708]: E0227 17:52:58.674316 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9bcf399ba11e0abc28227b4239478d6c943edcd7f0f5049351526d0e435845b1\": container with ID starting with 9bcf399ba11e0abc28227b4239478d6c943edcd7f0f5049351526d0e435845b1 not found: ID does not exist" containerID="9bcf399ba11e0abc28227b4239478d6c943edcd7f0f5049351526d0e435845b1" Feb 27 17:52:58 crc kubenswrapper[4708]: I0227 17:52:58.674372 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bcf399ba11e0abc28227b4239478d6c943edcd7f0f5049351526d0e435845b1"} err="failed to get container status \"9bcf399ba11e0abc28227b4239478d6c943edcd7f0f5049351526d0e435845b1\": rpc error: code = NotFound desc = could not find container \"9bcf399ba11e0abc28227b4239478d6c943edcd7f0f5049351526d0e435845b1\": container with ID starting with 9bcf399ba11e0abc28227b4239478d6c943edcd7f0f5049351526d0e435845b1 not found: ID does not exist" Feb 27 17:53:00 crc kubenswrapper[4708]: I0227 17:53:00.247460 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" path="/var/lib/kubelet/pods/88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d/volumes" Feb 27 17:53:02 crc kubenswrapper[4708]: I0227 17:53:02.872922 4708 scope.go:117] "RemoveContainer" containerID="3949841a7f0a843906dc0cdd5a221d62d142098aedc8494c2c8570d5cb3a0e0b" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.159609 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536914-spkws"] Feb 27 17:54:00 crc kubenswrapper[4708]: E0227 17:54:00.161217 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb824baf-4fac-4634-85d1-d126cb326116" containerName="oc" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.161235 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb824baf-4fac-4634-85d1-d126cb326116" containerName="oc" Feb 27 17:54:00 crc kubenswrapper[4708]: E0227 17:54:00.161247 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" containerName="registry-server" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.161254 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" containerName="registry-server" Feb 27 17:54:00 crc kubenswrapper[4708]: E0227 17:54:00.161273 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" containerName="extract-content" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.161282 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" containerName="extract-content" Feb 27 17:54:00 crc kubenswrapper[4708]: E0227 17:54:00.161301 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" containerName="extract-utilities" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.161309 4708 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" containerName="extract-utilities" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.161644 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f9d3ed-ac7f-45d8-b5a7-4793d8b8cf8d" containerName="registry-server" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.161681 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb824baf-4fac-4634-85d1-d126cb326116" containerName="oc" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.162887 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536914-spkws" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.166177 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.166185 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.166788 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.183113 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536914-spkws"] Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.308391 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpfzb\" (UniqueName: \"kubernetes.io/projected/e8cd8181-76b2-4bed-b9a2-7e175bfa46bc-kube-api-access-hpfzb\") pod \"auto-csr-approver-29536914-spkws\" (UID: \"e8cd8181-76b2-4bed-b9a2-7e175bfa46bc\") " pod="openshift-infra/auto-csr-approver-29536914-spkws" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.411491 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpfzb\" (UniqueName: \"kubernetes.io/projected/e8cd8181-76b2-4bed-b9a2-7e175bfa46bc-kube-api-access-hpfzb\") pod \"auto-csr-approver-29536914-spkws\" (UID: \"e8cd8181-76b2-4bed-b9a2-7e175bfa46bc\") " pod="openshift-infra/auto-csr-approver-29536914-spkws" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.430623 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpfzb\" (UniqueName: \"kubernetes.io/projected/e8cd8181-76b2-4bed-b9a2-7e175bfa46bc-kube-api-access-hpfzb\") pod \"auto-csr-approver-29536914-spkws\" (UID: \"e8cd8181-76b2-4bed-b9a2-7e175bfa46bc\") " pod="openshift-infra/auto-csr-approver-29536914-spkws" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.484774 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536914-spkws" Feb 27 17:54:00 crc kubenswrapper[4708]: I0227 17:54:00.979292 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536914-spkws"] Feb 27 17:54:01 crc kubenswrapper[4708]: I0227 17:54:01.279250 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536914-spkws" event={"ID":"e8cd8181-76b2-4bed-b9a2-7e175bfa46bc","Type":"ContainerStarted","Data":"bd7bab9ef629aea34471413b513a4c6c501ffbaa3942f732b98bac198b5fbf64"} Feb 27 17:54:03 crc kubenswrapper[4708]: I0227 17:54:03.303278 4708 generic.go:334] "Generic (PLEG): container finished" podID="e8cd8181-76b2-4bed-b9a2-7e175bfa46bc" containerID="ac1b234213245f1f46ffc714db2dc099c6d649f060ca88631451d30143106929" exitCode=0 Feb 27 17:54:03 crc kubenswrapper[4708]: I0227 17:54:03.303360 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536914-spkws" event={"ID":"e8cd8181-76b2-4bed-b9a2-7e175bfa46bc","Type":"ContainerDied","Data":"ac1b234213245f1f46ffc714db2dc099c6d649f060ca88631451d30143106929"} Feb 27 17:54:04 crc kubenswrapper[4708]: I0227 17:54:04.837430 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536914-spkws" Feb 27 17:54:05 crc kubenswrapper[4708]: I0227 17:54:05.018901 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpfzb\" (UniqueName: \"kubernetes.io/projected/e8cd8181-76b2-4bed-b9a2-7e175bfa46bc-kube-api-access-hpfzb\") pod \"e8cd8181-76b2-4bed-b9a2-7e175bfa46bc\" (UID: \"e8cd8181-76b2-4bed-b9a2-7e175bfa46bc\") " Feb 27 17:54:05 crc kubenswrapper[4708]: I0227 17:54:05.025227 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e8cd8181-76b2-4bed-b9a2-7e175bfa46bc-kube-api-access-hpfzb" (OuterVolumeSpecName: "kube-api-access-hpfzb") pod "e8cd8181-76b2-4bed-b9a2-7e175bfa46bc" (UID: "e8cd8181-76b2-4bed-b9a2-7e175bfa46bc"). InnerVolumeSpecName "kube-api-access-hpfzb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:54:05 crc kubenswrapper[4708]: I0227 17:54:05.121297 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpfzb\" (UniqueName: \"kubernetes.io/projected/e8cd8181-76b2-4bed-b9a2-7e175bfa46bc-kube-api-access-hpfzb\") on node \"crc\" DevicePath \"\"" Feb 27 17:54:05 crc kubenswrapper[4708]: I0227 17:54:05.328375 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536914-spkws" event={"ID":"e8cd8181-76b2-4bed-b9a2-7e175bfa46bc","Type":"ContainerDied","Data":"bd7bab9ef629aea34471413b513a4c6c501ffbaa3942f732b98bac198b5fbf64"} Feb 27 17:54:05 crc kubenswrapper[4708]: I0227 17:54:05.328413 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd7bab9ef629aea34471413b513a4c6c501ffbaa3942f732b98bac198b5fbf64" Feb 27 17:54:05 crc kubenswrapper[4708]: I0227 17:54:05.328424 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536914-spkws" Feb 27 17:54:05 crc kubenswrapper[4708]: I0227 17:54:05.930662 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536908-zq845"] Feb 27 17:54:05 crc kubenswrapper[4708]: I0227 17:54:05.941923 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536908-zq845"] Feb 27 17:54:06 crc kubenswrapper[4708]: I0227 17:54:06.239332 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5eef5971-bdc4-487b-badd-bdc41823889f" path="/var/lib/kubelet/pods/5eef5971-bdc4-487b-badd-bdc41823889f/volumes" Feb 27 17:54:35 crc kubenswrapper[4708]: I0227 17:54:35.631380 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:54:35 crc kubenswrapper[4708]: I0227 17:54:35.631920 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:55:05 crc kubenswrapper[4708]: I0227 17:55:05.631729 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:55:05 crc kubenswrapper[4708]: I0227 17:55:05.632341 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:55:35 crc kubenswrapper[4708]: I0227 17:55:35.631413 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 17:55:35 crc kubenswrapper[4708]: I0227 17:55:35.632122 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 17:55:35 crc kubenswrapper[4708]: I0227 17:55:35.632182 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 17:55:35 crc kubenswrapper[4708]: I0227 17:55:35.633387 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be 
restarted" Feb 27 17:55:35 crc kubenswrapper[4708]: I0227 17:55:35.633487 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" gracePeriod=600 Feb 27 17:55:35 crc kubenswrapper[4708]: E0227 17:55:35.767979 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:55:36 crc kubenswrapper[4708]: I0227 17:55:36.393131 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" exitCode=0 Feb 27 17:55:36 crc kubenswrapper[4708]: I0227 17:55:36.393166 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4"} Feb 27 17:55:36 crc kubenswrapper[4708]: I0227 17:55:36.393238 4708 scope.go:117] "RemoveContainer" containerID="63e575a8e32ef4c90c85e76af9a3f5d1acc3ab5df9c8fc9bf2827fca736a9ed5" Feb 27 17:55:36 crc kubenswrapper[4708]: I0227 17:55:36.394417 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:55:36 crc kubenswrapper[4708]: E0227 17:55:36.395630 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:55:50 crc kubenswrapper[4708]: I0227 17:55:50.228767 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:55:50 crc kubenswrapper[4708]: E0227 17:55:50.230197 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:55:54 crc kubenswrapper[4708]: I0227 17:55:54.668668 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-bmpqj"] Feb 27 17:55:54 crc kubenswrapper[4708]: E0227 17:55:54.669697 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e8cd8181-76b2-4bed-b9a2-7e175bfa46bc" containerName="oc" Feb 27 17:55:54 crc kubenswrapper[4708]: I0227 17:55:54.669714 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e8cd8181-76b2-4bed-b9a2-7e175bfa46bc" containerName="oc" 
Feb 27 17:55:54 crc kubenswrapper[4708]: I0227 17:55:54.669983 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="e8cd8181-76b2-4bed-b9a2-7e175bfa46bc" containerName="oc"
Feb 27 17:55:54 crc kubenswrapper[4708]: I0227 17:55:54.672335 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bmpqj"
Feb 27 17:55:54 crc kubenswrapper[4708]: I0227 17:55:54.702758 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bmpqj"]
Feb 27 17:55:54 crc kubenswrapper[4708]: I0227 17:55:54.837596 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7070b4f1-ecc4-41b0-98b6-1fca58160118-utilities\") pod \"certified-operators-bmpqj\" (UID: \"7070b4f1-ecc4-41b0-98b6-1fca58160118\") " pod="openshift-marketplace/certified-operators-bmpqj"
Feb 27 17:55:54 crc kubenswrapper[4708]: I0227 17:55:54.838011 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7070b4f1-ecc4-41b0-98b6-1fca58160118-catalog-content\") pod \"certified-operators-bmpqj\" (UID: \"7070b4f1-ecc4-41b0-98b6-1fca58160118\") " pod="openshift-marketplace/certified-operators-bmpqj"
Feb 27 17:55:54 crc kubenswrapper[4708]: I0227 17:55:54.838055 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clqn6\" (UniqueName: \"kubernetes.io/projected/7070b4f1-ecc4-41b0-98b6-1fca58160118-kube-api-access-clqn6\") pod \"certified-operators-bmpqj\" (UID: \"7070b4f1-ecc4-41b0-98b6-1fca58160118\") " pod="openshift-marketplace/certified-operators-bmpqj"
Feb 27 17:55:54 crc kubenswrapper[4708]: I0227 17:55:54.939956 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7070b4f1-ecc4-41b0-98b6-1fca58160118-utilities\") pod \"certified-operators-bmpqj\" (UID: \"7070b4f1-ecc4-41b0-98b6-1fca58160118\") " pod="openshift-marketplace/certified-operators-bmpqj"
Feb 27 17:55:54 crc kubenswrapper[4708]: I0227 17:55:54.940244 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7070b4f1-ecc4-41b0-98b6-1fca58160118-catalog-content\") pod \"certified-operators-bmpqj\" (UID: \"7070b4f1-ecc4-41b0-98b6-1fca58160118\") " pod="openshift-marketplace/certified-operators-bmpqj"
Feb 27 17:55:54 crc kubenswrapper[4708]: I0227 17:55:54.940343 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-clqn6\" (UniqueName: \"kubernetes.io/projected/7070b4f1-ecc4-41b0-98b6-1fca58160118-kube-api-access-clqn6\") pod \"certified-operators-bmpqj\" (UID: \"7070b4f1-ecc4-41b0-98b6-1fca58160118\") " pod="openshift-marketplace/certified-operators-bmpqj"
Feb 27 17:55:54 crc kubenswrapper[4708]: I0227 17:55:54.940562 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7070b4f1-ecc4-41b0-98b6-1fca58160118-utilities\") pod \"certified-operators-bmpqj\" (UID: \"7070b4f1-ecc4-41b0-98b6-1fca58160118\") " pod="openshift-marketplace/certified-operators-bmpqj"
Feb 27 17:55:54 crc kubenswrapper[4708]: I0227 17:55:54.940652 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7070b4f1-ecc4-41b0-98b6-1fca58160118-catalog-content\") pod \"certified-operators-bmpqj\" (UID: \"7070b4f1-ecc4-41b0-98b6-1fca58160118\") " pod="openshift-marketplace/certified-operators-bmpqj"
Feb 27 17:55:54 crc kubenswrapper[4708]: I0227 17:55:54.958692 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-clqn6\" (UniqueName: \"kubernetes.io/projected/7070b4f1-ecc4-41b0-98b6-1fca58160118-kube-api-access-clqn6\") pod \"certified-operators-bmpqj\" (UID: \"7070b4f1-ecc4-41b0-98b6-1fca58160118\") " pod="openshift-marketplace/certified-operators-bmpqj"
Feb 27 17:55:55 crc kubenswrapper[4708]: I0227 17:55:55.006179 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bmpqj"
Feb 27 17:55:55 crc kubenswrapper[4708]: W0227 17:55:55.461561 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7070b4f1_ecc4_41b0_98b6_1fca58160118.slice/crio-838288f319095f2440510abb76db1b796daeba1b4465cb2c8eb9ce1fe318b3f0 WatchSource:0}: Error finding container 838288f319095f2440510abb76db1b796daeba1b4465cb2c8eb9ce1fe318b3f0: Status 404 returned error can't find the container with id 838288f319095f2440510abb76db1b796daeba1b4465cb2c8eb9ce1fe318b3f0
Feb 27 17:55:55 crc kubenswrapper[4708]: I0227 17:55:55.461586 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-bmpqj"]
Feb 27 17:55:55 crc kubenswrapper[4708]: I0227 17:55:55.633810 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmpqj" event={"ID":"7070b4f1-ecc4-41b0-98b6-1fca58160118","Type":"ContainerStarted","Data":"838288f319095f2440510abb76db1b796daeba1b4465cb2c8eb9ce1fe318b3f0"}
Feb 27 17:55:56 crc kubenswrapper[4708]: I0227 17:55:56.648697 4708 generic.go:334] "Generic (PLEG): container finished" podID="7070b4f1-ecc4-41b0-98b6-1fca58160118" containerID="c609310f1c8eee1144d1f1f33ee094b5b273532dcb5ea1d0df026ae35238d665" exitCode=0
Feb 27 17:55:56 crc kubenswrapper[4708]: I0227 17:55:56.648738 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmpqj" event={"ID":"7070b4f1-ecc4-41b0-98b6-1fca58160118","Type":"ContainerDied","Data":"c609310f1c8eee1144d1f1f33ee094b5b273532dcb5ea1d0df026ae35238d665"}
Feb 27 17:55:56 crc kubenswrapper[4708]: I0227 17:55:56.650832 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 27 17:55:57 crc kubenswrapper[4708]: E0227 17:55:57.251554 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Feb 27 17:55:57 crc kubenswrapper[4708]: E0227 17:55:57.251974 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-clqn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-bmpqj_openshift-marketplace(7070b4f1-ecc4-41b0-98b6-1fca58160118): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 17:55:57 crc kubenswrapper[4708]: E0227 17:55:57.253843 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-bmpqj" podUID="7070b4f1-ecc4-41b0-98b6-1fca58160118"
Feb 27 17:55:57 crc kubenswrapper[4708]: E0227 17:55:57.664957 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-bmpqj" podUID="7070b4f1-ecc4-41b0-98b6-1fca58160118"
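[Editor's note] The pull above fails in the signature-fetch step (the registry's sigstore endpoint returning HTTP 500), not while fetching image layers, so the node itself is likely healthy. A small, hypothetical scanner for journal output in this exact format pulls out the failing image, the status code, and the signature URL from the "PullImage from image service failed" records; the regex is written against the lines above and is an illustration, not an official tool:

import re
import sys

# Hypothetical helper: extract image, HTTP status, and signature URL from
# "PullImage from image service failed" records in kubelet journal output.
PULL_FAIL = re.compile(
    r'"PullImage from image service failed".*?'
    r'reading signature from (?P<sig>https://\S+): status (?P<code>\d+).*?'
    r'image="(?P<image>[^"]+)"'
)

for line in sys.stdin:
    m = PULL_FAIL.search(line)
    if m:
        print(m.group("code"), m.group("image"), m.group("sig"))

Fed journalctl output for the kubelet unit, this would report each 500 alongside the index image it blocked, making it easy to see that every catalog image is failing the same way.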
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536916-jvlr9" Feb 27 17:56:00 crc kubenswrapper[4708]: I0227 17:56:00.177717 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:56:00 crc kubenswrapper[4708]: I0227 17:56:00.178969 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:56:00 crc kubenswrapper[4708]: I0227 17:56:00.179423 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:56:00 crc kubenswrapper[4708]: I0227 17:56:00.186261 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536916-jvlr9"] Feb 27 17:56:00 crc kubenswrapper[4708]: I0227 17:56:00.264456 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w8m2\" (UniqueName: \"kubernetes.io/projected/8e09bd3f-9e27-4168-b9a6-4855dd0dbaac-kube-api-access-6w8m2\") pod \"auto-csr-approver-29536916-jvlr9\" (UID: \"8e09bd3f-9e27-4168-b9a6-4855dd0dbaac\") " pod="openshift-infra/auto-csr-approver-29536916-jvlr9" Feb 27 17:56:00 crc kubenswrapper[4708]: I0227 17:56:00.367774 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6w8m2\" (UniqueName: \"kubernetes.io/projected/8e09bd3f-9e27-4168-b9a6-4855dd0dbaac-kube-api-access-6w8m2\") pod \"auto-csr-approver-29536916-jvlr9\" (UID: \"8e09bd3f-9e27-4168-b9a6-4855dd0dbaac\") " pod="openshift-infra/auto-csr-approver-29536916-jvlr9" Feb 27 17:56:00 crc kubenswrapper[4708]: I0227 17:56:00.402129 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6w8m2\" (UniqueName: \"kubernetes.io/projected/8e09bd3f-9e27-4168-b9a6-4855dd0dbaac-kube-api-access-6w8m2\") pod \"auto-csr-approver-29536916-jvlr9\" (UID: \"8e09bd3f-9e27-4168-b9a6-4855dd0dbaac\") " pod="openshift-infra/auto-csr-approver-29536916-jvlr9" Feb 27 17:56:00 crc kubenswrapper[4708]: I0227 17:56:00.505718 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536916-jvlr9" Feb 27 17:56:01 crc kubenswrapper[4708]: I0227 17:56:01.040313 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536916-jvlr9"] Feb 27 17:56:01 crc kubenswrapper[4708]: I0227 17:56:01.720153 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536916-jvlr9" event={"ID":"8e09bd3f-9e27-4168-b9a6-4855dd0dbaac","Type":"ContainerStarted","Data":"b5717ed6114acc87abab53178a91ccb98f95ea1aba551ced1d52814e3cbaf1f5"} Feb 27 17:56:02 crc kubenswrapper[4708]: E0227 17:56:02.006499 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:56:02 crc kubenswrapper[4708]: E0227 17:56:02.006765 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:56:02 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:56:02 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6w8m2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536916-jvlr9_openshift-infra(8e09bd3f-9e27-4168-b9a6-4855dd0dbaac): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:56:02 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:56:02 crc kubenswrapper[4708]: E0227 17:56:02.008064 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536916-jvlr9" podUID="8e09bd3f-9e27-4168-b9a6-4855dd0dbaac" Feb 27 17:56:02 crc kubenswrapper[4708]: I0227 17:56:02.238836 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:56:02 crc kubenswrapper[4708]: E0227 17:56:02.239503 4708 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:56:02 crc kubenswrapper[4708]: E0227 17:56:02.733836 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536916-jvlr9" podUID="8e09bd3f-9e27-4168-b9a6-4855dd0dbaac" Feb 27 17:56:13 crc kubenswrapper[4708]: I0227 17:56:13.229677 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:56:13 crc kubenswrapper[4708]: E0227 17:56:13.231219 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:56:13 crc kubenswrapper[4708]: I0227 17:56:13.897991 4708 generic.go:334] "Generic (PLEG): container finished" podID="7070b4f1-ecc4-41b0-98b6-1fca58160118" containerID="55e3d01774086ad7e4d1fad7b141f0adead0c358039844a03dad9f3dc32371f8" exitCode=0 Feb 27 17:56:13 crc kubenswrapper[4708]: I0227 17:56:13.898117 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmpqj" event={"ID":"7070b4f1-ecc4-41b0-98b6-1fca58160118","Type":"ContainerDied","Data":"55e3d01774086ad7e4d1fad7b141f0adead0c358039844a03dad9f3dc32371f8"} Feb 27 17:56:14 crc kubenswrapper[4708]: I0227 17:56:14.910361 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmpqj" event={"ID":"7070b4f1-ecc4-41b0-98b6-1fca58160118","Type":"ContainerStarted","Data":"a7d473daa4d7cfcc950fadaf35469c7909b99e972570aa3fb1c81d33edfe5a20"} Feb 27 17:56:14 crc kubenswrapper[4708]: I0227 17:56:14.933192 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-bmpqj" podStartSLOduration=3.244238091 podStartE2EDuration="20.933174624s" podCreationTimestamp="2026-02-27 17:55:54 +0000 UTC" firstStartedPulling="2026-02-27 17:55:56.650598234 +0000 UTC m=+3755.166395821" lastFinishedPulling="2026-02-27 17:56:14.339534737 +0000 UTC m=+3772.855332354" observedRunningTime="2026-02-27 17:56:14.924544763 +0000 UTC m=+3773.440342350" watchObservedRunningTime="2026-02-27 17:56:14.933174624 +0000 UTC m=+3773.448972211" Feb 27 17:56:15 crc kubenswrapper[4708]: I0227 17:56:15.006480 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-bmpqj" Feb 27 17:56:15 crc kubenswrapper[4708]: I0227 17:56:15.006526 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-bmpqj" Feb 27 17:56:16 crc kubenswrapper[4708]: I0227 17:56:16.048340 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-bmpqj" podUID="7070b4f1-ecc4-41b0-98b6-1fca58160118" 
containerName="registry-server" probeResult="failure" output=< Feb 27 17:56:16 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 17:56:16 crc kubenswrapper[4708]: > Feb 27 17:56:18 crc kubenswrapper[4708]: E0227 17:56:18.267511 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 17:56:18 crc kubenswrapper[4708]: E0227 17:56:18.268099 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 17:56:18 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 17:56:18 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6w8m2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536916-jvlr9_openshift-infra(8e09bd3f-9e27-4168-b9a6-4855dd0dbaac): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 17:56:18 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 17:56:18 crc kubenswrapper[4708]: E0227 17:56:18.269265 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536916-jvlr9" podUID="8e09bd3f-9e27-4168-b9a6-4855dd0dbaac" Feb 27 17:56:25 crc kubenswrapper[4708]: I0227 17:56:25.053949 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-bmpqj" Feb 27 17:56:25 crc kubenswrapper[4708]: I0227 17:56:25.110464 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-bmpqj" Feb 27 17:56:25 crc kubenswrapper[4708]: I0227 17:56:25.860208 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bmpqj"] Feb 27 17:56:27 crc kubenswrapper[4708]: I0227 17:56:27.051047 4708 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-marketplace/certified-operators-bmpqj" podUID="7070b4f1-ecc4-41b0-98b6-1fca58160118" containerName="registry-server" containerID="cri-o://a7d473daa4d7cfcc950fadaf35469c7909b99e972570aa3fb1c81d33edfe5a20" gracePeriod=2 Feb 27 17:56:27 crc kubenswrapper[4708]: I0227 17:56:27.635349 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bmpqj" Feb 27 17:56:27 crc kubenswrapper[4708]: I0227 17:56:27.728380 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7070b4f1-ecc4-41b0-98b6-1fca58160118-utilities\") pod \"7070b4f1-ecc4-41b0-98b6-1fca58160118\" (UID: \"7070b4f1-ecc4-41b0-98b6-1fca58160118\") " Feb 27 17:56:27 crc kubenswrapper[4708]: I0227 17:56:27.728754 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-clqn6\" (UniqueName: \"kubernetes.io/projected/7070b4f1-ecc4-41b0-98b6-1fca58160118-kube-api-access-clqn6\") pod \"7070b4f1-ecc4-41b0-98b6-1fca58160118\" (UID: \"7070b4f1-ecc4-41b0-98b6-1fca58160118\") " Feb 27 17:56:27 crc kubenswrapper[4708]: I0227 17:56:27.728937 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7070b4f1-ecc4-41b0-98b6-1fca58160118-catalog-content\") pod \"7070b4f1-ecc4-41b0-98b6-1fca58160118\" (UID: \"7070b4f1-ecc4-41b0-98b6-1fca58160118\") " Feb 27 17:56:27 crc kubenswrapper[4708]: I0227 17:56:27.730015 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7070b4f1-ecc4-41b0-98b6-1fca58160118-utilities" (OuterVolumeSpecName: "utilities") pod "7070b4f1-ecc4-41b0-98b6-1fca58160118" (UID: "7070b4f1-ecc4-41b0-98b6-1fca58160118"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:56:27 crc kubenswrapper[4708]: I0227 17:56:27.735989 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7070b4f1-ecc4-41b0-98b6-1fca58160118-kube-api-access-clqn6" (OuterVolumeSpecName: "kube-api-access-clqn6") pod "7070b4f1-ecc4-41b0-98b6-1fca58160118" (UID: "7070b4f1-ecc4-41b0-98b6-1fca58160118"). InnerVolumeSpecName "kube-api-access-clqn6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:56:27 crc kubenswrapper[4708]: I0227 17:56:27.812056 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7070b4f1-ecc4-41b0-98b6-1fca58160118-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7070b4f1-ecc4-41b0-98b6-1fca58160118" (UID: "7070b4f1-ecc4-41b0-98b6-1fca58160118"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:56:27 crc kubenswrapper[4708]: I0227 17:56:27.832473 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7070b4f1-ecc4-41b0-98b6-1fca58160118-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:56:27 crc kubenswrapper[4708]: I0227 17:56:27.832730 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-clqn6\" (UniqueName: \"kubernetes.io/projected/7070b4f1-ecc4-41b0-98b6-1fca58160118-kube-api-access-clqn6\") on node \"crc\" DevicePath \"\"" Feb 27 17:56:27 crc kubenswrapper[4708]: I0227 17:56:27.832907 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7070b4f1-ecc4-41b0-98b6-1fca58160118-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.066029 4708 generic.go:334] "Generic (PLEG): container finished" podID="7070b4f1-ecc4-41b0-98b6-1fca58160118" containerID="a7d473daa4d7cfcc950fadaf35469c7909b99e972570aa3fb1c81d33edfe5a20" exitCode=0 Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.066099 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmpqj" event={"ID":"7070b4f1-ecc4-41b0-98b6-1fca58160118","Type":"ContainerDied","Data":"a7d473daa4d7cfcc950fadaf35469c7909b99e972570aa3fb1c81d33edfe5a20"} Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.066152 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-bmpqj" Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.066177 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-bmpqj" event={"ID":"7070b4f1-ecc4-41b0-98b6-1fca58160118","Type":"ContainerDied","Data":"838288f319095f2440510abb76db1b796daeba1b4465cb2c8eb9ce1fe318b3f0"} Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.066210 4708 scope.go:117] "RemoveContainer" containerID="a7d473daa4d7cfcc950fadaf35469c7909b99e972570aa3fb1c81d33edfe5a20" Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.102047 4708 scope.go:117] "RemoveContainer" containerID="55e3d01774086ad7e4d1fad7b141f0adead0c358039844a03dad9f3dc32371f8" Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.110285 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-bmpqj"] Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.128091 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-bmpqj"] Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.140264 4708 scope.go:117] "RemoveContainer" containerID="c609310f1c8eee1144d1f1f33ee094b5b273532dcb5ea1d0df026ae35238d665" Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.222203 4708 scope.go:117] "RemoveContainer" containerID="a7d473daa4d7cfcc950fadaf35469c7909b99e972570aa3fb1c81d33edfe5a20" Feb 27 17:56:28 crc kubenswrapper[4708]: E0227 17:56:28.222775 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7d473daa4d7cfcc950fadaf35469c7909b99e972570aa3fb1c81d33edfe5a20\": container with ID starting with a7d473daa4d7cfcc950fadaf35469c7909b99e972570aa3fb1c81d33edfe5a20 not found: ID does not exist" containerID="a7d473daa4d7cfcc950fadaf35469c7909b99e972570aa3fb1c81d33edfe5a20" Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.222842 
4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7d473daa4d7cfcc950fadaf35469c7909b99e972570aa3fb1c81d33edfe5a20"} err="failed to get container status \"a7d473daa4d7cfcc950fadaf35469c7909b99e972570aa3fb1c81d33edfe5a20\": rpc error: code = NotFound desc = could not find container \"a7d473daa4d7cfcc950fadaf35469c7909b99e972570aa3fb1c81d33edfe5a20\": container with ID starting with a7d473daa4d7cfcc950fadaf35469c7909b99e972570aa3fb1c81d33edfe5a20 not found: ID does not exist" Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.223032 4708 scope.go:117] "RemoveContainer" containerID="55e3d01774086ad7e4d1fad7b141f0adead0c358039844a03dad9f3dc32371f8" Feb 27 17:56:28 crc kubenswrapper[4708]: E0227 17:56:28.223450 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55e3d01774086ad7e4d1fad7b141f0adead0c358039844a03dad9f3dc32371f8\": container with ID starting with 55e3d01774086ad7e4d1fad7b141f0adead0c358039844a03dad9f3dc32371f8 not found: ID does not exist" containerID="55e3d01774086ad7e4d1fad7b141f0adead0c358039844a03dad9f3dc32371f8" Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.223493 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55e3d01774086ad7e4d1fad7b141f0adead0c358039844a03dad9f3dc32371f8"} err="failed to get container status \"55e3d01774086ad7e4d1fad7b141f0adead0c358039844a03dad9f3dc32371f8\": rpc error: code = NotFound desc = could not find container \"55e3d01774086ad7e4d1fad7b141f0adead0c358039844a03dad9f3dc32371f8\": container with ID starting with 55e3d01774086ad7e4d1fad7b141f0adead0c358039844a03dad9f3dc32371f8 not found: ID does not exist" Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.223522 4708 scope.go:117] "RemoveContainer" containerID="c609310f1c8eee1144d1f1f33ee094b5b273532dcb5ea1d0df026ae35238d665" Feb 27 17:56:28 crc kubenswrapper[4708]: E0227 17:56:28.223815 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c609310f1c8eee1144d1f1f33ee094b5b273532dcb5ea1d0df026ae35238d665\": container with ID starting with c609310f1c8eee1144d1f1f33ee094b5b273532dcb5ea1d0df026ae35238d665 not found: ID does not exist" containerID="c609310f1c8eee1144d1f1f33ee094b5b273532dcb5ea1d0df026ae35238d665" Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.223901 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c609310f1c8eee1144d1f1f33ee094b5b273532dcb5ea1d0df026ae35238d665"} err="failed to get container status \"c609310f1c8eee1144d1f1f33ee094b5b273532dcb5ea1d0df026ae35238d665\": rpc error: code = NotFound desc = could not find container \"c609310f1c8eee1144d1f1f33ee094b5b273532dcb5ea1d0df026ae35238d665\": container with ID starting with c609310f1c8eee1144d1f1f33ee094b5b273532dcb5ea1d0df026ae35238d665 not found: ID does not exist" Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.229067 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:56:28 crc kubenswrapper[4708]: E0227 17:56:28.229374 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:56:28 crc kubenswrapper[4708]: I0227 17:56:28.257204 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7070b4f1-ecc4-41b0-98b6-1fca58160118" path="/var/lib/kubelet/pods/7070b4f1-ecc4-41b0-98b6-1fca58160118/volumes" Feb 27 17:56:29 crc kubenswrapper[4708]: E0227 17:56:29.230146 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536916-jvlr9" podUID="8e09bd3f-9e27-4168-b9a6-4855dd0dbaac" Feb 27 17:56:31 crc kubenswrapper[4708]: I0227 17:56:31.840269 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-lsqgf"] Feb 27 17:56:31 crc kubenswrapper[4708]: E0227 17:56:31.841133 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7070b4f1-ecc4-41b0-98b6-1fca58160118" containerName="extract-utilities" Feb 27 17:56:31 crc kubenswrapper[4708]: I0227 17:56:31.841146 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7070b4f1-ecc4-41b0-98b6-1fca58160118" containerName="extract-utilities" Feb 27 17:56:31 crc kubenswrapper[4708]: E0227 17:56:31.841166 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7070b4f1-ecc4-41b0-98b6-1fca58160118" containerName="registry-server" Feb 27 17:56:31 crc kubenswrapper[4708]: I0227 17:56:31.841174 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7070b4f1-ecc4-41b0-98b6-1fca58160118" containerName="registry-server" Feb 27 17:56:31 crc kubenswrapper[4708]: E0227 17:56:31.841202 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7070b4f1-ecc4-41b0-98b6-1fca58160118" containerName="extract-content" Feb 27 17:56:31 crc kubenswrapper[4708]: I0227 17:56:31.841210 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7070b4f1-ecc4-41b0-98b6-1fca58160118" containerName="extract-content" Feb 27 17:56:31 crc kubenswrapper[4708]: I0227 17:56:31.841398 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7070b4f1-ecc4-41b0-98b6-1fca58160118" containerName="registry-server" Feb 27 17:56:31 crc kubenswrapper[4708]: I0227 17:56:31.842780 4708 util.go:30] "No sandbox for pod can be found. 
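[Editor's note] The "DeleteContainer returned error ... not found" sequences above are a benign race, not data loss: the container was already removed, so the follow-up ContainerStatus lookup fails with NotFound and the kubelet logs it and moves on. A generic sketch of the NotFound-tolerant pattern this reflects (illustration only, not kubelet code):

# If the thing being removed is already gone, removal has succeeded.
class NotFoundError(Exception):
    pass

def remove_if_present(remove, container_id):
    try:
        remove(container_id)
    except NotFoundError:
        pass  # already deleted elsewhere; treat as success

store = {"a7d473da": object()}

def runtime_remove(cid):
    if cid not in store:
        raise NotFoundError(cid)
    del store[cid]

remove_if_present(runtime_remove, "a7d473da")
remove_if_present(runtime_remove, "a7d473da")  # second call: NotFound swallowed

The RemoveStaleState / "Deleted CPUSet assignment" entries that follow are similarly routine: when a new pod is admitted, the CPU and memory managers reconcile away assignments left behind by pods that no longer exist.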
Need to start a new one" pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:56:31 crc kubenswrapper[4708]: I0227 17:56:31.863221 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lsqgf"] Feb 27 17:56:31 crc kubenswrapper[4708]: I0227 17:56:31.930868 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1683f070-9dc7-47fd-8f89-4dbace38863c-catalog-content\") pod \"community-operators-lsqgf\" (UID: \"1683f070-9dc7-47fd-8f89-4dbace38863c\") " pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:56:31 crc kubenswrapper[4708]: I0227 17:56:31.930907 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1683f070-9dc7-47fd-8f89-4dbace38863c-utilities\") pod \"community-operators-lsqgf\" (UID: \"1683f070-9dc7-47fd-8f89-4dbace38863c\") " pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:56:31 crc kubenswrapper[4708]: I0227 17:56:31.931007 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zc8bk\" (UniqueName: \"kubernetes.io/projected/1683f070-9dc7-47fd-8f89-4dbace38863c-kube-api-access-zc8bk\") pod \"community-operators-lsqgf\" (UID: \"1683f070-9dc7-47fd-8f89-4dbace38863c\") " pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:56:32 crc kubenswrapper[4708]: I0227 17:56:32.032663 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zc8bk\" (UniqueName: \"kubernetes.io/projected/1683f070-9dc7-47fd-8f89-4dbace38863c-kube-api-access-zc8bk\") pod \"community-operators-lsqgf\" (UID: \"1683f070-9dc7-47fd-8f89-4dbace38863c\") " pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:56:32 crc kubenswrapper[4708]: I0227 17:56:32.032832 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1683f070-9dc7-47fd-8f89-4dbace38863c-catalog-content\") pod \"community-operators-lsqgf\" (UID: \"1683f070-9dc7-47fd-8f89-4dbace38863c\") " pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:56:32 crc kubenswrapper[4708]: I0227 17:56:32.032878 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1683f070-9dc7-47fd-8f89-4dbace38863c-utilities\") pod \"community-operators-lsqgf\" (UID: \"1683f070-9dc7-47fd-8f89-4dbace38863c\") " pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:56:32 crc kubenswrapper[4708]: I0227 17:56:32.033399 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1683f070-9dc7-47fd-8f89-4dbace38863c-utilities\") pod \"community-operators-lsqgf\" (UID: \"1683f070-9dc7-47fd-8f89-4dbace38863c\") " pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:56:32 crc kubenswrapper[4708]: I0227 17:56:32.033585 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1683f070-9dc7-47fd-8f89-4dbace38863c-catalog-content\") pod \"community-operators-lsqgf\" (UID: \"1683f070-9dc7-47fd-8f89-4dbace38863c\") " pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:56:32 crc kubenswrapper[4708]: I0227 17:56:32.051117 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zc8bk\" (UniqueName: \"kubernetes.io/projected/1683f070-9dc7-47fd-8f89-4dbace38863c-kube-api-access-zc8bk\") pod \"community-operators-lsqgf\" (UID: \"1683f070-9dc7-47fd-8f89-4dbace38863c\") " pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:56:32 crc kubenswrapper[4708]: I0227 17:56:32.167246 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:56:32 crc kubenswrapper[4708]: W0227 17:56:32.678825 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1683f070_9dc7_47fd_8f89_4dbace38863c.slice/crio-b3cbd161cf8be4fbb5911c2c7c51fba8d380eed4df94ef229cef6a5a6268f472 WatchSource:0}: Error finding container b3cbd161cf8be4fbb5911c2c7c51fba8d380eed4df94ef229cef6a5a6268f472: Status 404 returned error can't find the container with id b3cbd161cf8be4fbb5911c2c7c51fba8d380eed4df94ef229cef6a5a6268f472 Feb 27 17:56:32 crc kubenswrapper[4708]: I0227 17:56:32.678940 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-lsqgf"] Feb 27 17:56:33 crc kubenswrapper[4708]: I0227 17:56:33.125155 4708 generic.go:334] "Generic (PLEG): container finished" podID="1683f070-9dc7-47fd-8f89-4dbace38863c" containerID="d077534de4388c0121074994b018ee50f36eb02748437f396ac2f965dfc53dd1" exitCode=0 Feb 27 17:56:33 crc kubenswrapper[4708]: I0227 17:56:33.125209 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsqgf" event={"ID":"1683f070-9dc7-47fd-8f89-4dbace38863c","Type":"ContainerDied","Data":"d077534de4388c0121074994b018ee50f36eb02748437f396ac2f965dfc53dd1"} Feb 27 17:56:33 crc kubenswrapper[4708]: I0227 17:56:33.125529 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsqgf" event={"ID":"1683f070-9dc7-47fd-8f89-4dbace38863c","Type":"ContainerStarted","Data":"b3cbd161cf8be4fbb5911c2c7c51fba8d380eed4df94ef229cef6a5a6268f472"} Feb 27 17:56:34 crc kubenswrapper[4708]: E0227 17:56:34.062387 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 17:56:34 crc kubenswrapper[4708]: E0227 17:56:34.062902 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zc8bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lsqgf_openshift-marketplace(1683f070-9dc7-47fd-8f89-4dbace38863c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:56:34 crc kubenswrapper[4708]: E0227 17:56:34.064719 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-lsqgf" podUID="1683f070-9dc7-47fd-8f89-4dbace38863c" Feb 27 17:56:34 crc kubenswrapper[4708]: E0227 17:56:34.140410 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-lsqgf" podUID="1683f070-9dc7-47fd-8f89-4dbace38863c" Feb 27 17:56:40 crc kubenswrapper[4708]: I0227 17:56:40.228909 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:56:40 crc kubenswrapper[4708]: E0227 17:56:40.229886 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:56:43 crc kubenswrapper[4708]: I0227 17:56:43.252447 4708 generic.go:334] "Generic (PLEG): container finished" podID="8e09bd3f-9e27-4168-b9a6-4855dd0dbaac" 
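[Editor's note] Three different images (certified-operator-index, ose-cli, community-operator-index) now fail in the identical signature-fetch step, which points at the registry's sigstore endpoint rather than at this node. The signature URL is quoted verbatim in the log, so it can be probed directly; a quick check, with the URL copied from the community-operator-index failure above (whether the endpoint answers a plain GET is an assumption; sigstore signatures are ordinarily fetched over plain HTTPS):

import urllib.error
import urllib.request

# Probe the exact signature URL from the log; a 500 here would confirm the
# pull failures are a registry-side incident, not a node-side problem.
SIG_URL = ("https://registry.redhat.io/containers/sigstore/redhat/"
           "community-operator-index@sha256="
           "886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1"
           "/signature-2")

try:
    with urllib.request.urlopen(SIG_URL, timeout=10) as resp:
        print(resp.status)
except urllib.error.HTTPError as e:
    print(e.code)  # 500 while the incident seen in the log is ongoing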
containerID="4ffd1d36f2c8cb2eb20ccc3022e998a59c6d97b452fdec5bc945601b3705c4e2" exitCode=0 Feb 27 17:56:43 crc kubenswrapper[4708]: I0227 17:56:43.252533 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536916-jvlr9" event={"ID":"8e09bd3f-9e27-4168-b9a6-4855dd0dbaac","Type":"ContainerDied","Data":"4ffd1d36f2c8cb2eb20ccc3022e998a59c6d97b452fdec5bc945601b3705c4e2"} Feb 27 17:56:44 crc kubenswrapper[4708]: I0227 17:56:44.755860 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536916-jvlr9" Feb 27 17:56:44 crc kubenswrapper[4708]: I0227 17:56:44.855346 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6w8m2\" (UniqueName: \"kubernetes.io/projected/8e09bd3f-9e27-4168-b9a6-4855dd0dbaac-kube-api-access-6w8m2\") pod \"8e09bd3f-9e27-4168-b9a6-4855dd0dbaac\" (UID: \"8e09bd3f-9e27-4168-b9a6-4855dd0dbaac\") " Feb 27 17:56:44 crc kubenswrapper[4708]: I0227 17:56:44.863778 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e09bd3f-9e27-4168-b9a6-4855dd0dbaac-kube-api-access-6w8m2" (OuterVolumeSpecName: "kube-api-access-6w8m2") pod "8e09bd3f-9e27-4168-b9a6-4855dd0dbaac" (UID: "8e09bd3f-9e27-4168-b9a6-4855dd0dbaac"). InnerVolumeSpecName "kube-api-access-6w8m2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:56:44 crc kubenswrapper[4708]: I0227 17:56:44.958116 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6w8m2\" (UniqueName: \"kubernetes.io/projected/8e09bd3f-9e27-4168-b9a6-4855dd0dbaac-kube-api-access-6w8m2\") on node \"crc\" DevicePath \"\"" Feb 27 17:56:45 crc kubenswrapper[4708]: I0227 17:56:45.279386 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536916-jvlr9" event={"ID":"8e09bd3f-9e27-4168-b9a6-4855dd0dbaac","Type":"ContainerDied","Data":"b5717ed6114acc87abab53178a91ccb98f95ea1aba551ced1d52814e3cbaf1f5"} Feb 27 17:56:45 crc kubenswrapper[4708]: I0227 17:56:45.279451 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b5717ed6114acc87abab53178a91ccb98f95ea1aba551ced1d52814e3cbaf1f5" Feb 27 17:56:45 crc kubenswrapper[4708]: I0227 17:56:45.279456 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536916-jvlr9" Feb 27 17:56:45 crc kubenswrapper[4708]: I0227 17:56:45.859632 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536910-gxxhz"] Feb 27 17:56:45 crc kubenswrapper[4708]: I0227 17:56:45.870677 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536910-gxxhz"] Feb 27 17:56:46 crc kubenswrapper[4708]: I0227 17:56:46.243112 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d203876-ad77-46ba-a151-e2af5363930c" path="/var/lib/kubelet/pods/6d203876-ad77-46ba-a151-e2af5363930c/volumes" Feb 27 17:56:48 crc kubenswrapper[4708]: E0227 17:56:48.971239 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 17:56:48 crc kubenswrapper[4708]: E0227 17:56:48.972145 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zc8bk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-lsqgf_openshift-marketplace(1683f070-9dc7-47fd-8f89-4dbace38863c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 17:56:48 crc kubenswrapper[4708]: E0227 17:56:48.973422 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading 
signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-lsqgf" podUID="1683f070-9dc7-47fd-8f89-4dbace38863c" Feb 27 17:56:52 crc kubenswrapper[4708]: I0227 17:56:52.233805 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:56:52 crc kubenswrapper[4708]: E0227 17:56:52.235679 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:56:59 crc kubenswrapper[4708]: I0227 17:56:59.720098 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-25j92"] Feb 27 17:56:59 crc kubenswrapper[4708]: E0227 17:56:59.723524 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e09bd3f-9e27-4168-b9a6-4855dd0dbaac" containerName="oc" Feb 27 17:56:59 crc kubenswrapper[4708]: I0227 17:56:59.723560 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e09bd3f-9e27-4168-b9a6-4855dd0dbaac" containerName="oc" Feb 27 17:56:59 crc kubenswrapper[4708]: I0227 17:56:59.723812 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e09bd3f-9e27-4168-b9a6-4855dd0dbaac" containerName="oc" Feb 27 17:56:59 crc kubenswrapper[4708]: I0227 17:56:59.725487 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25j92" Feb 27 17:56:59 crc kubenswrapper[4708]: I0227 17:56:59.739256 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-25j92"] Feb 27 17:56:59 crc kubenswrapper[4708]: I0227 17:56:59.911531 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ab2d7d-a27b-485e-87a4-c71865982bee-catalog-content\") pod \"redhat-marketplace-25j92\" (UID: \"b9ab2d7d-a27b-485e-87a4-c71865982bee\") " pod="openshift-marketplace/redhat-marketplace-25j92" Feb 27 17:56:59 crc kubenswrapper[4708]: I0227 17:56:59.911623 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ncnzv\" (UniqueName: \"kubernetes.io/projected/b9ab2d7d-a27b-485e-87a4-c71865982bee-kube-api-access-ncnzv\") pod \"redhat-marketplace-25j92\" (UID: \"b9ab2d7d-a27b-485e-87a4-c71865982bee\") " pod="openshift-marketplace/redhat-marketplace-25j92" Feb 27 17:56:59 crc kubenswrapper[4708]: I0227 17:56:59.911713 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ab2d7d-a27b-485e-87a4-c71865982bee-utilities\") pod \"redhat-marketplace-25j92\" (UID: \"b9ab2d7d-a27b-485e-87a4-c71865982bee\") " pod="openshift-marketplace/redhat-marketplace-25j92" Feb 27 17:57:00 crc kubenswrapper[4708]: I0227 17:57:00.014064 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ab2d7d-a27b-485e-87a4-c71865982bee-catalog-content\") pod \"redhat-marketplace-25j92\" (UID: \"b9ab2d7d-a27b-485e-87a4-c71865982bee\") " pod="openshift-marketplace/redhat-marketplace-25j92" Feb 27 17:57:00 crc kubenswrapper[4708]: I0227 17:57:00.014132 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ncnzv\" (UniqueName: \"kubernetes.io/projected/b9ab2d7d-a27b-485e-87a4-c71865982bee-kube-api-access-ncnzv\") pod \"redhat-marketplace-25j92\" (UID: \"b9ab2d7d-a27b-485e-87a4-c71865982bee\") " pod="openshift-marketplace/redhat-marketplace-25j92" Feb 27 17:57:00 crc kubenswrapper[4708]: I0227 17:57:00.014195 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ab2d7d-a27b-485e-87a4-c71865982bee-utilities\") pod \"redhat-marketplace-25j92\" (UID: \"b9ab2d7d-a27b-485e-87a4-c71865982bee\") " pod="openshift-marketplace/redhat-marketplace-25j92" Feb 27 17:57:00 crc kubenswrapper[4708]: I0227 17:57:00.015062 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ab2d7d-a27b-485e-87a4-c71865982bee-utilities\") pod \"redhat-marketplace-25j92\" (UID: \"b9ab2d7d-a27b-485e-87a4-c71865982bee\") " pod="openshift-marketplace/redhat-marketplace-25j92" Feb 27 17:57:00 crc kubenswrapper[4708]: I0227 17:57:00.015094 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ab2d7d-a27b-485e-87a4-c71865982bee-catalog-content\") pod \"redhat-marketplace-25j92\" (UID: \"b9ab2d7d-a27b-485e-87a4-c71865982bee\") " pod="openshift-marketplace/redhat-marketplace-25j92" Feb 27 17:57:00 crc kubenswrapper[4708]: I0227 17:57:00.034982 4708 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-ncnzv\" (UniqueName: \"kubernetes.io/projected/b9ab2d7d-a27b-485e-87a4-c71865982bee-kube-api-access-ncnzv\") pod \"redhat-marketplace-25j92\" (UID: \"b9ab2d7d-a27b-485e-87a4-c71865982bee\") " pod="openshift-marketplace/redhat-marketplace-25j92" Feb 27 17:57:00 crc kubenswrapper[4708]: I0227 17:57:00.051129 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25j92" Feb 27 17:57:00 crc kubenswrapper[4708]: I0227 17:57:00.529677 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-25j92"] Feb 27 17:57:01 crc kubenswrapper[4708]: I0227 17:57:01.485813 4708 generic.go:334] "Generic (PLEG): container finished" podID="b9ab2d7d-a27b-485e-87a4-c71865982bee" containerID="2f768b01342562d229fd570a8ac396b69f3ab762adc9908df79649bef26d3a69" exitCode=0 Feb 27 17:57:01 crc kubenswrapper[4708]: I0227 17:57:01.485906 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25j92" event={"ID":"b9ab2d7d-a27b-485e-87a4-c71865982bee","Type":"ContainerDied","Data":"2f768b01342562d229fd570a8ac396b69f3ab762adc9908df79649bef26d3a69"} Feb 27 17:57:01 crc kubenswrapper[4708]: I0227 17:57:01.486370 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25j92" event={"ID":"b9ab2d7d-a27b-485e-87a4-c71865982bee","Type":"ContainerStarted","Data":"e3e05394e163e1c5b149d44cb7b1c9d3defa3609b068924f47124ffbb228d3dd"} Feb 27 17:57:02 crc kubenswrapper[4708]: E0227 17:57:02.246900 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-lsqgf" podUID="1683f070-9dc7-47fd-8f89-4dbace38863c" Feb 27 17:57:02 crc kubenswrapper[4708]: I0227 17:57:02.497956 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25j92" event={"ID":"b9ab2d7d-a27b-485e-87a4-c71865982bee","Type":"ContainerStarted","Data":"a76c6ffc74631f369e2dc3ae3e7bd9173c9e71413dbd5e3c597005dc6ce9a279"} Feb 27 17:57:03 crc kubenswrapper[4708]: I0227 17:57:03.054764 4708 scope.go:117] "RemoveContainer" containerID="1ec2352169dc1c3da7f06823b726774e3559e155036a7b581147c01ce5bc1803" Feb 27 17:57:03 crc kubenswrapper[4708]: I0227 17:57:03.516076 4708 generic.go:334] "Generic (PLEG): container finished" podID="b9ab2d7d-a27b-485e-87a4-c71865982bee" containerID="a76c6ffc74631f369e2dc3ae3e7bd9173c9e71413dbd5e3c597005dc6ce9a279" exitCode=0 Feb 27 17:57:03 crc kubenswrapper[4708]: I0227 17:57:03.516131 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25j92" event={"ID":"b9ab2d7d-a27b-485e-87a4-c71865982bee","Type":"ContainerDied","Data":"a76c6ffc74631f369e2dc3ae3e7bd9173c9e71413dbd5e3c597005dc6ce9a279"} Feb 27 17:57:04 crc kubenswrapper[4708]: I0227 17:57:04.527028 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25j92" event={"ID":"b9ab2d7d-a27b-485e-87a4-c71865982bee","Type":"ContainerStarted","Data":"279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2"} Feb 27 17:57:04 crc kubenswrapper[4708]: I0227 17:57:04.553483 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-25j92" 
Feb 27 17:57:07 crc kubenswrapper[4708]: I0227 17:57:07.229225 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4"
Feb 27 17:57:07 crc kubenswrapper[4708]: E0227 17:57:07.230201 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 17:57:10 crc kubenswrapper[4708]: I0227 17:57:10.051883 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-25j92"
Feb 27 17:57:10 crc kubenswrapper[4708]: I0227 17:57:10.052346 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-25j92"
Feb 27 17:57:10 crc kubenswrapper[4708]: I0227 17:57:10.123962 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-25j92"
Feb 27 17:57:10 crc kubenswrapper[4708]: I0227 17:57:10.674386 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-25j92"
Feb 27 17:57:10 crc kubenswrapper[4708]: I0227 17:57:10.748788 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-25j92"]
Feb 27 17:57:12 crc kubenswrapper[4708]: I0227 17:57:12.627594 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-25j92" podUID="b9ab2d7d-a27b-485e-87a4-c71865982bee" containerName="registry-server" containerID="cri-o://279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2" gracePeriod=2
Feb 27 17:57:12 crc kubenswrapper[4708]: E0227 17:57:12.826073 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9ab2d7d_a27b_485e_87a4_c71865982bee.slice/crio-279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9ab2d7d_a27b_485e_87a4_c71865982bee.slice/crio-conmon-279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2.scope\": RecentStats: unable to find data in memory cache]"
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.169711 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25j92"
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.319397 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ab2d7d-a27b-485e-87a4-c71865982bee-utilities\") pod \"b9ab2d7d-a27b-485e-87a4-c71865982bee\" (UID: \"b9ab2d7d-a27b-485e-87a4-c71865982bee\") "
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.319635 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ab2d7d-a27b-485e-87a4-c71865982bee-catalog-content\") pod \"b9ab2d7d-a27b-485e-87a4-c71865982bee\" (UID: \"b9ab2d7d-a27b-485e-87a4-c71865982bee\") "
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.319671 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ncnzv\" (UniqueName: \"kubernetes.io/projected/b9ab2d7d-a27b-485e-87a4-c71865982bee-kube-api-access-ncnzv\") pod \"b9ab2d7d-a27b-485e-87a4-c71865982bee\" (UID: \"b9ab2d7d-a27b-485e-87a4-c71865982bee\") "
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.320498 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ab2d7d-a27b-485e-87a4-c71865982bee-utilities" (OuterVolumeSpecName: "utilities") pod "b9ab2d7d-a27b-485e-87a4-c71865982bee" (UID: "b9ab2d7d-a27b-485e-87a4-c71865982bee"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.328739 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ab2d7d-a27b-485e-87a4-c71865982bee-kube-api-access-ncnzv" (OuterVolumeSpecName: "kube-api-access-ncnzv") pod "b9ab2d7d-a27b-485e-87a4-c71865982bee" (UID: "b9ab2d7d-a27b-485e-87a4-c71865982bee"). InnerVolumeSpecName "kube-api-access-ncnzv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.345668 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ab2d7d-a27b-485e-87a4-c71865982bee-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b9ab2d7d-a27b-485e-87a4-c71865982bee" (UID: "b9ab2d7d-a27b-485e-87a4-c71865982bee"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.422393 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b9ab2d7d-a27b-485e-87a4-c71865982bee-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.422926 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ncnzv\" (UniqueName: \"kubernetes.io/projected/b9ab2d7d-a27b-485e-87a4-c71865982bee-kube-api-access-ncnzv\") on node \"crc\" DevicePath \"\""
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.423021 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b9ab2d7d-a27b-485e-87a4-c71865982bee-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.638895 4708 generic.go:334] "Generic (PLEG): container finished" podID="b9ab2d7d-a27b-485e-87a4-c71865982bee" containerID="279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2" exitCode=0
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.638996 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25j92" event={"ID":"b9ab2d7d-a27b-485e-87a4-c71865982bee","Type":"ContainerDied","Data":"279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2"}
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.639055 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-25j92"
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.639981 4708 scope.go:117] "RemoveContainer" containerID="279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2"
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.640983 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-25j92" event={"ID":"b9ab2d7d-a27b-485e-87a4-c71865982bee","Type":"ContainerDied","Data":"e3e05394e163e1c5b149d44cb7b1c9d3defa3609b068924f47124ffbb228d3dd"}
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.669106 4708 scope.go:117] "RemoveContainer" containerID="a76c6ffc74631f369e2dc3ae3e7bd9173c9e71413dbd5e3c597005dc6ce9a279"
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.683559 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-25j92"]
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.699942 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-25j92"]
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.714250 4708 scope.go:117] "RemoveContainer" containerID="2f768b01342562d229fd570a8ac396b69f3ab762adc9908df79649bef26d3a69"
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.743904 4708 scope.go:117] "RemoveContainer" containerID="279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2"
Feb 27 17:57:13 crc kubenswrapper[4708]: E0227 17:57:13.744437 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2\": container with ID starting with 279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2 not found: ID does not exist" containerID="279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2"
Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.744559 4708
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2"} err="failed to get container status \"279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2\": rpc error: code = NotFound desc = could not find container \"279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2\": container with ID starting with 279770ce277625a9f34f7726d75f5c544a2b6fcae23d21c673d93f3edb2fcbc2 not found: ID does not exist" Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.744648 4708 scope.go:117] "RemoveContainer" containerID="a76c6ffc74631f369e2dc3ae3e7bd9173c9e71413dbd5e3c597005dc6ce9a279" Feb 27 17:57:13 crc kubenswrapper[4708]: E0227 17:57:13.745312 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a76c6ffc74631f369e2dc3ae3e7bd9173c9e71413dbd5e3c597005dc6ce9a279\": container with ID starting with a76c6ffc74631f369e2dc3ae3e7bd9173c9e71413dbd5e3c597005dc6ce9a279 not found: ID does not exist" containerID="a76c6ffc74631f369e2dc3ae3e7bd9173c9e71413dbd5e3c597005dc6ce9a279" Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.745357 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a76c6ffc74631f369e2dc3ae3e7bd9173c9e71413dbd5e3c597005dc6ce9a279"} err="failed to get container status \"a76c6ffc74631f369e2dc3ae3e7bd9173c9e71413dbd5e3c597005dc6ce9a279\": rpc error: code = NotFound desc = could not find container \"a76c6ffc74631f369e2dc3ae3e7bd9173c9e71413dbd5e3c597005dc6ce9a279\": container with ID starting with a76c6ffc74631f369e2dc3ae3e7bd9173c9e71413dbd5e3c597005dc6ce9a279 not found: ID does not exist" Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.745390 4708 scope.go:117] "RemoveContainer" containerID="2f768b01342562d229fd570a8ac396b69f3ab762adc9908df79649bef26d3a69" Feb 27 17:57:13 crc kubenswrapper[4708]: E0227 17:57:13.745785 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f768b01342562d229fd570a8ac396b69f3ab762adc9908df79649bef26d3a69\": container with ID starting with 2f768b01342562d229fd570a8ac396b69f3ab762adc9908df79649bef26d3a69 not found: ID does not exist" containerID="2f768b01342562d229fd570a8ac396b69f3ab762adc9908df79649bef26d3a69" Feb 27 17:57:13 crc kubenswrapper[4708]: I0227 17:57:13.745895 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f768b01342562d229fd570a8ac396b69f3ab762adc9908df79649bef26d3a69"} err="failed to get container status \"2f768b01342562d229fd570a8ac396b69f3ab762adc9908df79649bef26d3a69\": rpc error: code = NotFound desc = could not find container \"2f768b01342562d229fd570a8ac396b69f3ab762adc9908df79649bef26d3a69\": container with ID starting with 2f768b01342562d229fd570a8ac396b69f3ab762adc9908df79649bef26d3a69 not found: ID does not exist" Feb 27 17:57:14 crc kubenswrapper[4708]: I0227 17:57:14.249207 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9ab2d7d-a27b-485e-87a4-c71865982bee" path="/var/lib/kubelet/pods/b9ab2d7d-a27b-485e-87a4-c71865982bee/volumes" Feb 27 17:57:18 crc kubenswrapper[4708]: I0227 17:57:18.228808 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:57:18 crc kubenswrapper[4708]: E0227 17:57:18.229480 4708 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:57:18 crc kubenswrapper[4708]: I0227 17:57:18.705886 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsqgf" event={"ID":"1683f070-9dc7-47fd-8f89-4dbace38863c","Type":"ContainerStarted","Data":"9895460300cf1ca6183a0a4b8b32fdf92b4bf6a2392a033532b4ed8de0b99373"} Feb 27 17:57:20 crc kubenswrapper[4708]: I0227 17:57:20.726939 4708 generic.go:334] "Generic (PLEG): container finished" podID="1683f070-9dc7-47fd-8f89-4dbace38863c" containerID="9895460300cf1ca6183a0a4b8b32fdf92b4bf6a2392a033532b4ed8de0b99373" exitCode=0 Feb 27 17:57:20 crc kubenswrapper[4708]: I0227 17:57:20.726988 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsqgf" event={"ID":"1683f070-9dc7-47fd-8f89-4dbace38863c","Type":"ContainerDied","Data":"9895460300cf1ca6183a0a4b8b32fdf92b4bf6a2392a033532b4ed8de0b99373"} Feb 27 17:57:21 crc kubenswrapper[4708]: I0227 17:57:21.745717 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsqgf" event={"ID":"1683f070-9dc7-47fd-8f89-4dbace38863c","Type":"ContainerStarted","Data":"35b843f1371b64455523422bbe0098443c5e83a0646b35325ddfd4d8807aa640"} Feb 27 17:57:21 crc kubenswrapper[4708]: I0227 17:57:21.774010 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-lsqgf" podStartSLOduration=2.698845874 podStartE2EDuration="50.773995658s" podCreationTimestamp="2026-02-27 17:56:31 +0000 UTC" firstStartedPulling="2026-02-27 17:56:33.128536051 +0000 UTC m=+3791.644333638" lastFinishedPulling="2026-02-27 17:57:21.203685835 +0000 UTC m=+3839.719483422" observedRunningTime="2026-02-27 17:57:21.765378475 +0000 UTC m=+3840.281176062" watchObservedRunningTime="2026-02-27 17:57:21.773995658 +0000 UTC m=+3840.289793245" Feb 27 17:57:22 crc kubenswrapper[4708]: I0227 17:57:22.167996 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:57:22 crc kubenswrapper[4708]: I0227 17:57:22.168039 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:57:23 crc kubenswrapper[4708]: I0227 17:57:23.249070 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-lsqgf" podUID="1683f070-9dc7-47fd-8f89-4dbace38863c" containerName="registry-server" probeResult="failure" output=< Feb 27 17:57:23 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 17:57:23 crc kubenswrapper[4708]: > Feb 27 17:57:32 crc kubenswrapper[4708]: I0227 17:57:32.245153 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:57:32 crc kubenswrapper[4708]: I0227 17:57:32.245312 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:57:32 crc kubenswrapper[4708]: E0227 17:57:32.245986 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:57:32 crc kubenswrapper[4708]: I0227 17:57:32.319314 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:57:32 crc kubenswrapper[4708]: I0227 17:57:32.481600 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lsqgf"] Feb 27 17:57:33 crc kubenswrapper[4708]: I0227 17:57:33.880209 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-lsqgf" podUID="1683f070-9dc7-47fd-8f89-4dbace38863c" containerName="registry-server" containerID="cri-o://35b843f1371b64455523422bbe0098443c5e83a0646b35325ddfd4d8807aa640" gracePeriod=2 Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.484069 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.591782 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1683f070-9dc7-47fd-8f89-4dbace38863c-catalog-content\") pod \"1683f070-9dc7-47fd-8f89-4dbace38863c\" (UID: \"1683f070-9dc7-47fd-8f89-4dbace38863c\") " Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.592382 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1683f070-9dc7-47fd-8f89-4dbace38863c-utilities\") pod \"1683f070-9dc7-47fd-8f89-4dbace38863c\" (UID: \"1683f070-9dc7-47fd-8f89-4dbace38863c\") " Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.592413 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zc8bk\" (UniqueName: \"kubernetes.io/projected/1683f070-9dc7-47fd-8f89-4dbace38863c-kube-api-access-zc8bk\") pod \"1683f070-9dc7-47fd-8f89-4dbace38863c\" (UID: \"1683f070-9dc7-47fd-8f89-4dbace38863c\") " Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.593841 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1683f070-9dc7-47fd-8f89-4dbace38863c-utilities" (OuterVolumeSpecName: "utilities") pod "1683f070-9dc7-47fd-8f89-4dbace38863c" (UID: "1683f070-9dc7-47fd-8f89-4dbace38863c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.598742 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1683f070-9dc7-47fd-8f89-4dbace38863c-kube-api-access-zc8bk" (OuterVolumeSpecName: "kube-api-access-zc8bk") pod "1683f070-9dc7-47fd-8f89-4dbace38863c" (UID: "1683f070-9dc7-47fd-8f89-4dbace38863c"). InnerVolumeSpecName "kube-api-access-zc8bk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.656650 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1683f070-9dc7-47fd-8f89-4dbace38863c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1683f070-9dc7-47fd-8f89-4dbace38863c" (UID: "1683f070-9dc7-47fd-8f89-4dbace38863c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.698669 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1683f070-9dc7-47fd-8f89-4dbace38863c-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.699864 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zc8bk\" (UniqueName: \"kubernetes.io/projected/1683f070-9dc7-47fd-8f89-4dbace38863c-kube-api-access-zc8bk\") on node \"crc\" DevicePath \"\"" Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.699883 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1683f070-9dc7-47fd-8f89-4dbace38863c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.892489 4708 generic.go:334] "Generic (PLEG): container finished" podID="1683f070-9dc7-47fd-8f89-4dbace38863c" containerID="35b843f1371b64455523422bbe0098443c5e83a0646b35325ddfd4d8807aa640" exitCode=0 Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.892531 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsqgf" event={"ID":"1683f070-9dc7-47fd-8f89-4dbace38863c","Type":"ContainerDied","Data":"35b843f1371b64455523422bbe0098443c5e83a0646b35325ddfd4d8807aa640"} Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.892557 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-lsqgf" event={"ID":"1683f070-9dc7-47fd-8f89-4dbace38863c","Type":"ContainerDied","Data":"b3cbd161cf8be4fbb5911c2c7c51fba8d380eed4df94ef229cef6a5a6268f472"} Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.892573 4708 scope.go:117] "RemoveContainer" containerID="35b843f1371b64455523422bbe0098443c5e83a0646b35325ddfd4d8807aa640" Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.892698 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-lsqgf" Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.936632 4708 scope.go:117] "RemoveContainer" containerID="9895460300cf1ca6183a0a4b8b32fdf92b4bf6a2392a033532b4ed8de0b99373" Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.945254 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-lsqgf"] Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.953833 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-lsqgf"] Feb 27 17:57:34 crc kubenswrapper[4708]: I0227 17:57:34.973146 4708 scope.go:117] "RemoveContainer" containerID="d077534de4388c0121074994b018ee50f36eb02748437f396ac2f965dfc53dd1" Feb 27 17:57:35 crc kubenswrapper[4708]: I0227 17:57:35.017342 4708 scope.go:117] "RemoveContainer" containerID="35b843f1371b64455523422bbe0098443c5e83a0646b35325ddfd4d8807aa640" Feb 27 17:57:35 crc kubenswrapper[4708]: E0227 17:57:35.017823 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35b843f1371b64455523422bbe0098443c5e83a0646b35325ddfd4d8807aa640\": container with ID starting with 35b843f1371b64455523422bbe0098443c5e83a0646b35325ddfd4d8807aa640 not found: ID does not exist" containerID="35b843f1371b64455523422bbe0098443c5e83a0646b35325ddfd4d8807aa640" Feb 27 17:57:35 crc kubenswrapper[4708]: I0227 17:57:35.017891 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35b843f1371b64455523422bbe0098443c5e83a0646b35325ddfd4d8807aa640"} err="failed to get container status \"35b843f1371b64455523422bbe0098443c5e83a0646b35325ddfd4d8807aa640\": rpc error: code = NotFound desc = could not find container \"35b843f1371b64455523422bbe0098443c5e83a0646b35325ddfd4d8807aa640\": container with ID starting with 35b843f1371b64455523422bbe0098443c5e83a0646b35325ddfd4d8807aa640 not found: ID does not exist" Feb 27 17:57:35 crc kubenswrapper[4708]: I0227 17:57:35.017923 4708 scope.go:117] "RemoveContainer" containerID="9895460300cf1ca6183a0a4b8b32fdf92b4bf6a2392a033532b4ed8de0b99373" Feb 27 17:57:35 crc kubenswrapper[4708]: E0227 17:57:35.018597 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9895460300cf1ca6183a0a4b8b32fdf92b4bf6a2392a033532b4ed8de0b99373\": container with ID starting with 9895460300cf1ca6183a0a4b8b32fdf92b4bf6a2392a033532b4ed8de0b99373 not found: ID does not exist" containerID="9895460300cf1ca6183a0a4b8b32fdf92b4bf6a2392a033532b4ed8de0b99373" Feb 27 17:57:35 crc kubenswrapper[4708]: I0227 17:57:35.018632 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9895460300cf1ca6183a0a4b8b32fdf92b4bf6a2392a033532b4ed8de0b99373"} err="failed to get container status \"9895460300cf1ca6183a0a4b8b32fdf92b4bf6a2392a033532b4ed8de0b99373\": rpc error: code = NotFound desc = could not find container \"9895460300cf1ca6183a0a4b8b32fdf92b4bf6a2392a033532b4ed8de0b99373\": container with ID starting with 9895460300cf1ca6183a0a4b8b32fdf92b4bf6a2392a033532b4ed8de0b99373 not found: ID does not exist" Feb 27 17:57:35 crc kubenswrapper[4708]: I0227 17:57:35.018656 4708 scope.go:117] "RemoveContainer" containerID="d077534de4388c0121074994b018ee50f36eb02748437f396ac2f965dfc53dd1" Feb 27 17:57:35 crc kubenswrapper[4708]: E0227 17:57:35.019252 4708 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"d077534de4388c0121074994b018ee50f36eb02748437f396ac2f965dfc53dd1\": container with ID starting with d077534de4388c0121074994b018ee50f36eb02748437f396ac2f965dfc53dd1 not found: ID does not exist" containerID="d077534de4388c0121074994b018ee50f36eb02748437f396ac2f965dfc53dd1" Feb 27 17:57:35 crc kubenswrapper[4708]: I0227 17:57:35.019278 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d077534de4388c0121074994b018ee50f36eb02748437f396ac2f965dfc53dd1"} err="failed to get container status \"d077534de4388c0121074994b018ee50f36eb02748437f396ac2f965dfc53dd1\": rpc error: code = NotFound desc = could not find container \"d077534de4388c0121074994b018ee50f36eb02748437f396ac2f965dfc53dd1\": container with ID starting with d077534de4388c0121074994b018ee50f36eb02748437f396ac2f965dfc53dd1 not found: ID does not exist" Feb 27 17:57:36 crc kubenswrapper[4708]: I0227 17:57:36.250331 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1683f070-9dc7-47fd-8f89-4dbace38863c" path="/var/lib/kubelet/pods/1683f070-9dc7-47fd-8f89-4dbace38863c/volumes" Feb 27 17:57:47 crc kubenswrapper[4708]: I0227 17:57:47.228650 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:57:47 crc kubenswrapper[4708]: E0227 17:57:47.229610 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:57:59 crc kubenswrapper[4708]: I0227 17:57:59.228626 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:57:59 crc kubenswrapper[4708]: E0227 17:57:59.229428 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.156307 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536918-qhzmk"] Feb 27 17:58:00 crc kubenswrapper[4708]: E0227 17:58:00.156952 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1683f070-9dc7-47fd-8f89-4dbace38863c" containerName="extract-content" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.156984 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1683f070-9dc7-47fd-8f89-4dbace38863c" containerName="extract-content" Feb 27 17:58:00 crc kubenswrapper[4708]: E0227 17:58:00.157009 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1683f070-9dc7-47fd-8f89-4dbace38863c" containerName="extract-utilities" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.157020 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1683f070-9dc7-47fd-8f89-4dbace38863c" containerName="extract-utilities" Feb 27 17:58:00 crc kubenswrapper[4708]: E0227 
17:58:00.157041 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ab2d7d-a27b-485e-87a4-c71865982bee" containerName="extract-utilities" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.157051 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ab2d7d-a27b-485e-87a4-c71865982bee" containerName="extract-utilities" Feb 27 17:58:00 crc kubenswrapper[4708]: E0227 17:58:00.157081 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ab2d7d-a27b-485e-87a4-c71865982bee" containerName="registry-server" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.157093 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ab2d7d-a27b-485e-87a4-c71865982bee" containerName="registry-server" Feb 27 17:58:00 crc kubenswrapper[4708]: E0227 17:58:00.157115 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ab2d7d-a27b-485e-87a4-c71865982bee" containerName="extract-content" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.157126 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ab2d7d-a27b-485e-87a4-c71865982bee" containerName="extract-content" Feb 27 17:58:00 crc kubenswrapper[4708]: E0227 17:58:00.157166 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1683f070-9dc7-47fd-8f89-4dbace38863c" containerName="registry-server" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.157179 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="1683f070-9dc7-47fd-8f89-4dbace38863c" containerName="registry-server" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.157490 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="1683f070-9dc7-47fd-8f89-4dbace38863c" containerName="registry-server" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.157527 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ab2d7d-a27b-485e-87a4-c71865982bee" containerName="registry-server" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.158803 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536918-qhzmk" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.160966 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.162126 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.162378 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.171161 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536918-qhzmk"] Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.291294 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vq969\" (UniqueName: \"kubernetes.io/projected/13cfe6fb-32f4-4bb0-86b3-4096b99fd489-kube-api-access-vq969\") pod \"auto-csr-approver-29536918-qhzmk\" (UID: \"13cfe6fb-32f4-4bb0-86b3-4096b99fd489\") " pod="openshift-infra/auto-csr-approver-29536918-qhzmk" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.394803 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vq969\" (UniqueName: \"kubernetes.io/projected/13cfe6fb-32f4-4bb0-86b3-4096b99fd489-kube-api-access-vq969\") pod \"auto-csr-approver-29536918-qhzmk\" (UID: \"13cfe6fb-32f4-4bb0-86b3-4096b99fd489\") " pod="openshift-infra/auto-csr-approver-29536918-qhzmk" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.426960 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vq969\" (UniqueName: \"kubernetes.io/projected/13cfe6fb-32f4-4bb0-86b3-4096b99fd489-kube-api-access-vq969\") pod \"auto-csr-approver-29536918-qhzmk\" (UID: \"13cfe6fb-32f4-4bb0-86b3-4096b99fd489\") " pod="openshift-infra/auto-csr-approver-29536918-qhzmk" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.491703 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536918-qhzmk" Feb 27 17:58:00 crc kubenswrapper[4708]: I0227 17:58:00.998127 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536918-qhzmk"] Feb 27 17:58:01 crc kubenswrapper[4708]: I0227 17:58:01.191679 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536918-qhzmk" event={"ID":"13cfe6fb-32f4-4bb0-86b3-4096b99fd489","Type":"ContainerStarted","Data":"a35b9fc2b326a81818f82b1c5ee0fe47dc0062c0d706cc0dcbf3e0eb5ac50e86"} Feb 27 17:58:03 crc kubenswrapper[4708]: I0227 17:58:03.159367 4708 scope.go:117] "RemoveContainer" containerID="1edc4084819b9eb16d611f08e6981dc85027a46adb7eb874e493475693498c2e" Feb 27 17:58:03 crc kubenswrapper[4708]: I0227 17:58:03.217732 4708 generic.go:334] "Generic (PLEG): container finished" podID="13cfe6fb-32f4-4bb0-86b3-4096b99fd489" containerID="640720e0093d79ef1b87156536370f5f815e50ef5f648ba7d5ee674bace00d2a" exitCode=0 Feb 27 17:58:03 crc kubenswrapper[4708]: I0227 17:58:03.217823 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536918-qhzmk" event={"ID":"13cfe6fb-32f4-4bb0-86b3-4096b99fd489","Type":"ContainerDied","Data":"640720e0093d79ef1b87156536370f5f815e50ef5f648ba7d5ee674bace00d2a"} Feb 27 17:58:04 crc kubenswrapper[4708]: I0227 17:58:04.652458 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536918-qhzmk" Feb 27 17:58:04 crc kubenswrapper[4708]: I0227 17:58:04.798172 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vq969\" (UniqueName: \"kubernetes.io/projected/13cfe6fb-32f4-4bb0-86b3-4096b99fd489-kube-api-access-vq969\") pod \"13cfe6fb-32f4-4bb0-86b3-4096b99fd489\" (UID: \"13cfe6fb-32f4-4bb0-86b3-4096b99fd489\") " Feb 27 17:58:04 crc kubenswrapper[4708]: I0227 17:58:04.804649 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13cfe6fb-32f4-4bb0-86b3-4096b99fd489-kube-api-access-vq969" (OuterVolumeSpecName: "kube-api-access-vq969") pod "13cfe6fb-32f4-4bb0-86b3-4096b99fd489" (UID: "13cfe6fb-32f4-4bb0-86b3-4096b99fd489"). InnerVolumeSpecName "kube-api-access-vq969". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 17:58:04 crc kubenswrapper[4708]: I0227 17:58:04.901779 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vq969\" (UniqueName: \"kubernetes.io/projected/13cfe6fb-32f4-4bb0-86b3-4096b99fd489-kube-api-access-vq969\") on node \"crc\" DevicePath \"\"" Feb 27 17:58:05 crc kubenswrapper[4708]: I0227 17:58:05.246281 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536918-qhzmk" event={"ID":"13cfe6fb-32f4-4bb0-86b3-4096b99fd489","Type":"ContainerDied","Data":"a35b9fc2b326a81818f82b1c5ee0fe47dc0062c0d706cc0dcbf3e0eb5ac50e86"} Feb 27 17:58:05 crc kubenswrapper[4708]: I0227 17:58:05.246331 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a35b9fc2b326a81818f82b1c5ee0fe47dc0062c0d706cc0dcbf3e0eb5ac50e86" Feb 27 17:58:05 crc kubenswrapper[4708]: I0227 17:58:05.246414 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536918-qhzmk" Feb 27 17:58:05 crc kubenswrapper[4708]: I0227 17:58:05.722441 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536912-fvtmd"] Feb 27 17:58:05 crc kubenswrapper[4708]: I0227 17:58:05.734977 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536912-fvtmd"] Feb 27 17:58:06 crc kubenswrapper[4708]: I0227 17:58:06.244400 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb824baf-4fac-4634-85d1-d126cb326116" path="/var/lib/kubelet/pods/eb824baf-4fac-4634-85d1-d126cb326116/volumes" Feb 27 17:58:13 crc kubenswrapper[4708]: I0227 17:58:13.229550 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:58:13 crc kubenswrapper[4708]: E0227 17:58:13.230430 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:58:28 crc kubenswrapper[4708]: I0227 17:58:28.229272 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:58:28 crc kubenswrapper[4708]: E0227 17:58:28.230451 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:58:40 crc kubenswrapper[4708]: I0227 17:58:40.229182 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:58:40 crc kubenswrapper[4708]: E0227 17:58:40.230354 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:58:52 crc kubenswrapper[4708]: I0227 17:58:52.235734 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:58:52 crc kubenswrapper[4708]: E0227 17:58:52.236508 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:59:03 crc kubenswrapper[4708]: I0227 17:59:03.235307 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 
17:59:03 crc kubenswrapper[4708]: E0227 17:59:03.236585 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:59:03 crc kubenswrapper[4708]: I0227 17:59:03.294755 4708 scope.go:117] "RemoveContainer" containerID="28d0e25079553eacf5e2b7ab66a909e110b6de80827b7bf13e1405c8256528d8" Feb 27 17:59:16 crc kubenswrapper[4708]: I0227 17:59:16.228978 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:59:16 crc kubenswrapper[4708]: E0227 17:59:16.229666 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:59:29 crc kubenswrapper[4708]: I0227 17:59:29.229228 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:59:29 crc kubenswrapper[4708]: E0227 17:59:29.231168 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:59:44 crc kubenswrapper[4708]: I0227 17:59:44.229188 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:59:44 crc kubenswrapper[4708]: E0227 17:59:44.231781 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 17:59:58 crc kubenswrapper[4708]: I0227 17:59:58.229193 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 17:59:58 crc kubenswrapper[4708]: E0227 17:59:58.230149 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.181384 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536920-qvr9v"] Feb 27 18:00:00 crc 
kubenswrapper[4708]: E0227 18:00:00.182550 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13cfe6fb-32f4-4bb0-86b3-4096b99fd489" containerName="oc" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.182576 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="13cfe6fb-32f4-4bb0-86b3-4096b99fd489" containerName="oc" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.183065 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="13cfe6fb-32f4-4bb0-86b3-4096b99fd489" containerName="oc" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.194420 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536920-qvr9v" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.222515 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.222742 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.224299 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.253704 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv"] Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.255522 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536920-qvr9v"] Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.255618 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.257318 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.257459 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.265018 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv"] Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.336143 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brqqn\" (UniqueName: \"kubernetes.io/projected/37134f16-3dd4-4d15-8848-5674ca11e392-kube-api-access-brqqn\") pod \"auto-csr-approver-29536920-qvr9v\" (UID: \"37134f16-3dd4-4d15-8848-5674ca11e392\") " pod="openshift-infra/auto-csr-approver-29536920-qvr9v" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.438739 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-secret-volume\") pod \"collect-profiles-29536920-2zcgv\" (UID: \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.439144 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qbqg\" (UniqueName: 
\"kubernetes.io/projected/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-kube-api-access-5qbqg\") pod \"collect-profiles-29536920-2zcgv\" (UID: \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.439572 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brqqn\" (UniqueName: \"kubernetes.io/projected/37134f16-3dd4-4d15-8848-5674ca11e392-kube-api-access-brqqn\") pod \"auto-csr-approver-29536920-qvr9v\" (UID: \"37134f16-3dd4-4d15-8848-5674ca11e392\") " pod="openshift-infra/auto-csr-approver-29536920-qvr9v" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.439687 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-config-volume\") pod \"collect-profiles-29536920-2zcgv\" (UID: \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.469009 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brqqn\" (UniqueName: \"kubernetes.io/projected/37134f16-3dd4-4d15-8848-5674ca11e392-kube-api-access-brqqn\") pod \"auto-csr-approver-29536920-qvr9v\" (UID: \"37134f16-3dd4-4d15-8848-5674ca11e392\") " pod="openshift-infra/auto-csr-approver-29536920-qvr9v" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.540064 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536920-qvr9v" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.541800 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qbqg\" (UniqueName: \"kubernetes.io/projected/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-kube-api-access-5qbqg\") pod \"collect-profiles-29536920-2zcgv\" (UID: \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.542056 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-config-volume\") pod \"collect-profiles-29536920-2zcgv\" (UID: \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.542208 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-secret-volume\") pod \"collect-profiles-29536920-2zcgv\" (UID: \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.543674 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-config-volume\") pod \"collect-profiles-29536920-2zcgv\" (UID: \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.553055 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" 
(UniqueName: \"kubernetes.io/secret/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-secret-volume\") pod \"collect-profiles-29536920-2zcgv\" (UID: \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.568833 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qbqg\" (UniqueName: \"kubernetes.io/projected/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-kube-api-access-5qbqg\") pod \"collect-profiles-29536920-2zcgv\" (UID: \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" Feb 27 18:00:00 crc kubenswrapper[4708]: I0227 18:00:00.572295 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" Feb 27 18:00:01 crc kubenswrapper[4708]: I0227 18:00:01.026898 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536920-qvr9v"] Feb 27 18:00:01 crc kubenswrapper[4708]: W0227 18:00:01.028326 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37134f16_3dd4_4d15_8848_5674ca11e392.slice/crio-e139ccae8e4f339915e51f8179f108de99fef8b30a8c094ec58d405c471a47f6 WatchSource:0}: Error finding container e139ccae8e4f339915e51f8179f108de99fef8b30a8c094ec58d405c471a47f6: Status 404 returned error can't find the container with id e139ccae8e4f339915e51f8179f108de99fef8b30a8c094ec58d405c471a47f6 Feb 27 18:00:01 crc kubenswrapper[4708]: I0227 18:00:01.141546 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv"] Feb 27 18:00:01 crc kubenswrapper[4708]: W0227 18:00:01.146302 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c2e5bdc_0049_41dc_8cc7_cfb16ed96b4f.slice/crio-9965ac7efb4f5ae6d2b28f6997270aa32029159e47e2f62d11d8c78588036f37 WatchSource:0}: Error finding container 9965ac7efb4f5ae6d2b28f6997270aa32029159e47e2f62d11d8c78588036f37: Status 404 returned error can't find the container with id 9965ac7efb4f5ae6d2b28f6997270aa32029159e47e2f62d11d8c78588036f37 Feb 27 18:00:01 crc kubenswrapper[4708]: I0227 18:00:01.609094 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536920-qvr9v" event={"ID":"37134f16-3dd4-4d15-8848-5674ca11e392","Type":"ContainerStarted","Data":"e139ccae8e4f339915e51f8179f108de99fef8b30a8c094ec58d405c471a47f6"} Feb 27 18:00:01 crc kubenswrapper[4708]: I0227 18:00:01.610387 4708 generic.go:334] "Generic (PLEG): container finished" podID="8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f" containerID="3f5676a035056e250c17dc3330b09159457e45c52441a457959606a4d006da1e" exitCode=0 Feb 27 18:00:01 crc kubenswrapper[4708]: I0227 18:00:01.610424 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" event={"ID":"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f","Type":"ContainerDied","Data":"3f5676a035056e250c17dc3330b09159457e45c52441a457959606a4d006da1e"} Feb 27 18:00:01 crc kubenswrapper[4708]: I0227 18:00:01.610444 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" 
event={"ID":"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f","Type":"ContainerStarted","Data":"9965ac7efb4f5ae6d2b28f6997270aa32029159e47e2f62d11d8c78588036f37"} Feb 27 18:00:03 crc kubenswrapper[4708]: I0227 18:00:03.093786 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" Feb 27 18:00:03 crc kubenswrapper[4708]: I0227 18:00:03.201093 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-secret-volume\") pod \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\" (UID: \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\") " Feb 27 18:00:03 crc kubenswrapper[4708]: I0227 18:00:03.201219 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qbqg\" (UniqueName: \"kubernetes.io/projected/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-kube-api-access-5qbqg\") pod \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\" (UID: \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\") " Feb 27 18:00:03 crc kubenswrapper[4708]: I0227 18:00:03.201301 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-config-volume\") pod \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\" (UID: \"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f\") " Feb 27 18:00:03 crc kubenswrapper[4708]: I0227 18:00:03.202178 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-config-volume" (OuterVolumeSpecName: "config-volume") pod "8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f" (UID: "8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:00:03 crc kubenswrapper[4708]: I0227 18:00:03.209693 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-kube-api-access-5qbqg" (OuterVolumeSpecName: "kube-api-access-5qbqg") pod "8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f" (UID: "8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f"). InnerVolumeSpecName "kube-api-access-5qbqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:00:03 crc kubenswrapper[4708]: I0227 18:00:03.210134 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f" (UID: "8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:00:03 crc kubenswrapper[4708]: I0227 18:00:03.304714 4708 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 18:00:03 crc kubenswrapper[4708]: I0227 18:00:03.304826 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qbqg\" (UniqueName: \"kubernetes.io/projected/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-kube-api-access-5qbqg\") on node \"crc\" DevicePath \"\"" Feb 27 18:00:03 crc kubenswrapper[4708]: I0227 18:00:03.304865 4708 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 18:00:03 crc kubenswrapper[4708]: I0227 18:00:03.636531 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" event={"ID":"8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f","Type":"ContainerDied","Data":"9965ac7efb4f5ae6d2b28f6997270aa32029159e47e2f62d11d8c78588036f37"} Feb 27 18:00:03 crc kubenswrapper[4708]: I0227 18:00:03.636582 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9965ac7efb4f5ae6d2b28f6997270aa32029159e47e2f62d11d8c78588036f37" Feb 27 18:00:03 crc kubenswrapper[4708]: I0227 18:00:03.636884 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv" Feb 27 18:00:04 crc kubenswrapper[4708]: I0227 18:00:04.231903 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt"] Feb 27 18:00:04 crc kubenswrapper[4708]: I0227 18:00:04.259764 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536875-lrdlt"] Feb 27 18:00:05 crc kubenswrapper[4708]: I0227 18:00:05.674389 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536920-qvr9v" event={"ID":"37134f16-3dd4-4d15-8848-5674ca11e392","Type":"ContainerStarted","Data":"6d4326b7f75356fb2aea4e833eda1c6f545ac34d2fe41355c0bcce38a03786cc"} Feb 27 18:00:05 crc kubenswrapper[4708]: I0227 18:00:05.697176 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536920-qvr9v" podStartSLOduration=2.433983951 podStartE2EDuration="5.697158376s" podCreationTimestamp="2026-02-27 18:00:00 +0000 UTC" firstStartedPulling="2026-02-27 18:00:01.030791094 +0000 UTC m=+3999.546588691" lastFinishedPulling="2026-02-27 18:00:04.293965519 +0000 UTC m=+4002.809763116" observedRunningTime="2026-02-27 18:00:05.690635212 +0000 UTC m=+4004.206432799" watchObservedRunningTime="2026-02-27 18:00:05.697158376 +0000 UTC m=+4004.212955963" Feb 27 18:00:05 crc kubenswrapper[4708]: I0227 18:00:05.931430 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-distributor-grpc" Feb 27 18:00:06 crc kubenswrapper[4708]: I0227 18:00:06.240938 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae8ae876-ebb6-4de9-bacc-1efece3d20a0" path="/var/lib/kubelet/pods/ae8ae876-ebb6-4de9-bacc-1efece3d20a0/volumes" Feb 27 18:00:06 crc kubenswrapper[4708]: I0227 18:00:06.684276 4708 generic.go:334] "Generic (PLEG): container finished" 
podID="37134f16-3dd4-4d15-8848-5674ca11e392" containerID="6d4326b7f75356fb2aea4e833eda1c6f545ac34d2fe41355c0bcce38a03786cc" exitCode=0 Feb 27 18:00:06 crc kubenswrapper[4708]: I0227 18:00:06.684311 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536920-qvr9v" event={"ID":"37134f16-3dd4-4d15-8848-5674ca11e392","Type":"ContainerDied","Data":"6d4326b7f75356fb2aea4e833eda1c6f545ac34d2fe41355c0bcce38a03786cc"} Feb 27 18:00:08 crc kubenswrapper[4708]: I0227 18:00:08.209721 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536920-qvr9v" Feb 27 18:00:08 crc kubenswrapper[4708]: I0227 18:00:08.311687 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brqqn\" (UniqueName: \"kubernetes.io/projected/37134f16-3dd4-4d15-8848-5674ca11e392-kube-api-access-brqqn\") pod \"37134f16-3dd4-4d15-8848-5674ca11e392\" (UID: \"37134f16-3dd4-4d15-8848-5674ca11e392\") " Feb 27 18:00:08 crc kubenswrapper[4708]: I0227 18:00:08.321287 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37134f16-3dd4-4d15-8848-5674ca11e392-kube-api-access-brqqn" (OuterVolumeSpecName: "kube-api-access-brqqn") pod "37134f16-3dd4-4d15-8848-5674ca11e392" (UID: "37134f16-3dd4-4d15-8848-5674ca11e392"). InnerVolumeSpecName "kube-api-access-brqqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:00:08 crc kubenswrapper[4708]: I0227 18:00:08.414580 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brqqn\" (UniqueName: \"kubernetes.io/projected/37134f16-3dd4-4d15-8848-5674ca11e392-kube-api-access-brqqn\") on node \"crc\" DevicePath \"\"" Feb 27 18:00:08 crc kubenswrapper[4708]: I0227 18:00:08.705109 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536920-qvr9v" event={"ID":"37134f16-3dd4-4d15-8848-5674ca11e392","Type":"ContainerDied","Data":"e139ccae8e4f339915e51f8179f108de99fef8b30a8c094ec58d405c471a47f6"} Feb 27 18:00:08 crc kubenswrapper[4708]: I0227 18:00:08.705158 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e139ccae8e4f339915e51f8179f108de99fef8b30a8c094ec58d405c471a47f6" Feb 27 18:00:08 crc kubenswrapper[4708]: I0227 18:00:08.705177 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536920-qvr9v" Feb 27 18:00:08 crc kubenswrapper[4708]: I0227 18:00:08.745406 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536914-spkws"] Feb 27 18:00:08 crc kubenswrapper[4708]: I0227 18:00:08.753751 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536914-spkws"] Feb 27 18:00:10 crc kubenswrapper[4708]: I0227 18:00:10.228902 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 18:00:10 crc kubenswrapper[4708]: E0227 18:00:10.229840 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:00:10 crc kubenswrapper[4708]: I0227 18:00:10.246325 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8cd8181-76b2-4bed-b9a2-7e175bfa46bc" path="/var/lib/kubelet/pods/e8cd8181-76b2-4bed-b9a2-7e175bfa46bc/volumes" Feb 27 18:00:22 crc kubenswrapper[4708]: I0227 18:00:22.242444 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 18:00:22 crc kubenswrapper[4708]: E0227 18:00:22.243666 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:00:34 crc kubenswrapper[4708]: I0227 18:00:34.228527 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 18:00:34 crc kubenswrapper[4708]: E0227 18:00:34.229377 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:00:45 crc kubenswrapper[4708]: I0227 18:00:45.228822 4708 scope.go:117] "RemoveContainer" containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 18:00:46 crc kubenswrapper[4708]: I0227 18:00:46.159835 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"2032b2d772007daf5716b35cf3a58fa4ff30ba2a5de11ac4ec08c8034d52d619"} Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.157386 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29536921-w8pnk"] Feb 27 18:01:00 crc kubenswrapper[4708]: E0227 18:01:00.158390 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37134f16-3dd4-4d15-8848-5674ca11e392" 
containerName="oc" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.158406 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="37134f16-3dd4-4d15-8848-5674ca11e392" containerName="oc" Feb 27 18:01:00 crc kubenswrapper[4708]: E0227 18:01:00.158451 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f" containerName="collect-profiles" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.158459 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f" containerName="collect-profiles" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.158730 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f" containerName="collect-profiles" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.158785 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="37134f16-3dd4-4d15-8848-5674ca11e392" containerName="oc" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.161429 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.171266 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29536921-w8pnk"] Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.194641 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87l8q\" (UniqueName: \"kubernetes.io/projected/d3b26d5e-d907-420b-b4be-bdb12fd169e7-kube-api-access-87l8q\") pod \"keystone-cron-29536921-w8pnk\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.195513 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-config-data\") pod \"keystone-cron-29536921-w8pnk\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.195956 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-fernet-keys\") pod \"keystone-cron-29536921-w8pnk\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.196738 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-combined-ca-bundle\") pod \"keystone-cron-29536921-w8pnk\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.300247 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87l8q\" (UniqueName: \"kubernetes.io/projected/d3b26d5e-d907-420b-b4be-bdb12fd169e7-kube-api-access-87l8q\") pod \"keystone-cron-29536921-w8pnk\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.300652 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-config-data\") pod \"keystone-cron-29536921-w8pnk\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.300715 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-fernet-keys\") pod \"keystone-cron-29536921-w8pnk\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.300751 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-combined-ca-bundle\") pod \"keystone-cron-29536921-w8pnk\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.307790 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-combined-ca-bundle\") pod \"keystone-cron-29536921-w8pnk\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.307990 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-fernet-keys\") pod \"keystone-cron-29536921-w8pnk\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.308382 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-config-data\") pod \"keystone-cron-29536921-w8pnk\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.327275 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87l8q\" (UniqueName: \"kubernetes.io/projected/d3b26d5e-d907-420b-b4be-bdb12fd169e7-kube-api-access-87l8q\") pod \"keystone-cron-29536921-w8pnk\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.491778 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:00 crc kubenswrapper[4708]: I0227 18:01:00.978513 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29536921-w8pnk"] Feb 27 18:01:02 crc kubenswrapper[4708]: I0227 18:01:02.348645 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536921-w8pnk" event={"ID":"d3b26d5e-d907-420b-b4be-bdb12fd169e7","Type":"ContainerStarted","Data":"9aca945682116ce2097dde3746330227411fe89b9e4c5dc2fb4f592c0ae145fd"} Feb 27 18:01:02 crc kubenswrapper[4708]: I0227 18:01:02.349117 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536921-w8pnk" event={"ID":"d3b26d5e-d907-420b-b4be-bdb12fd169e7","Type":"ContainerStarted","Data":"5e734559d29d5f41248b74775962dfb28d9857df363ad5f74196629391fea5bb"} Feb 27 18:01:02 crc kubenswrapper[4708]: I0227 18:01:02.367193 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29536921-w8pnk" podStartSLOduration=2.367154909 podStartE2EDuration="2.367154909s" podCreationTimestamp="2026-02-27 18:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:01:02.36362895 +0000 UTC m=+4060.879426547" watchObservedRunningTime="2026-02-27 18:01:02.367154909 +0000 UTC m=+4060.882952506" Feb 27 18:01:03 crc kubenswrapper[4708]: I0227 18:01:03.401065 4708 scope.go:117] "RemoveContainer" containerID="ac1b234213245f1f46ffc714db2dc099c6d649f060ca88631451d30143106929" Feb 27 18:01:03 crc kubenswrapper[4708]: I0227 18:01:03.452030 4708 scope.go:117] "RemoveContainer" containerID="f10e8f69b87946145636ad505915d1f4f02d31bf0c709914f48cb1918f55cf6c" Feb 27 18:01:05 crc kubenswrapper[4708]: I0227 18:01:05.394064 4708 generic.go:334] "Generic (PLEG): container finished" podID="d3b26d5e-d907-420b-b4be-bdb12fd169e7" containerID="9aca945682116ce2097dde3746330227411fe89b9e4c5dc2fb4f592c0ae145fd" exitCode=0 Feb 27 18:01:05 crc kubenswrapper[4708]: I0227 18:01:05.394182 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536921-w8pnk" event={"ID":"d3b26d5e-d907-420b-b4be-bdb12fd169e7","Type":"ContainerDied","Data":"9aca945682116ce2097dde3746330227411fe89b9e4c5dc2fb4f592c0ae145fd"} Feb 27 18:01:06 crc kubenswrapper[4708]: I0227 18:01:06.922566 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:06 crc kubenswrapper[4708]: I0227 18:01:06.944865 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-fernet-keys\") pod \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " Feb 27 18:01:06 crc kubenswrapper[4708]: I0227 18:01:06.944915 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87l8q\" (UniqueName: \"kubernetes.io/projected/d3b26d5e-d907-420b-b4be-bdb12fd169e7-kube-api-access-87l8q\") pod \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " Feb 27 18:01:06 crc kubenswrapper[4708]: I0227 18:01:06.944980 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-config-data\") pod \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " Feb 27 18:01:06 crc kubenswrapper[4708]: I0227 18:01:06.945062 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-combined-ca-bundle\") pod \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\" (UID: \"d3b26d5e-d907-420b-b4be-bdb12fd169e7\") " Feb 27 18:01:06 crc kubenswrapper[4708]: I0227 18:01:06.956054 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "d3b26d5e-d907-420b-b4be-bdb12fd169e7" (UID: "d3b26d5e-d907-420b-b4be-bdb12fd169e7"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:01:06 crc kubenswrapper[4708]: I0227 18:01:06.956117 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3b26d5e-d907-420b-b4be-bdb12fd169e7-kube-api-access-87l8q" (OuterVolumeSpecName: "kube-api-access-87l8q") pod "d3b26d5e-d907-420b-b4be-bdb12fd169e7" (UID: "d3b26d5e-d907-420b-b4be-bdb12fd169e7"). InnerVolumeSpecName "kube-api-access-87l8q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:01:06 crc kubenswrapper[4708]: I0227 18:01:06.979223 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3b26d5e-d907-420b-b4be-bdb12fd169e7" (UID: "d3b26d5e-d907-420b-b4be-bdb12fd169e7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:01:07 crc kubenswrapper[4708]: I0227 18:01:07.032394 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-config-data" (OuterVolumeSpecName: "config-data") pod "d3b26d5e-d907-420b-b4be-bdb12fd169e7" (UID: "d3b26d5e-d907-420b-b4be-bdb12fd169e7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:01:07 crc kubenswrapper[4708]: I0227 18:01:07.048037 4708 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 18:01:07 crc kubenswrapper[4708]: I0227 18:01:07.048076 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87l8q\" (UniqueName: \"kubernetes.io/projected/d3b26d5e-d907-420b-b4be-bdb12fd169e7-kube-api-access-87l8q\") on node \"crc\" DevicePath \"\"" Feb 27 18:01:07 crc kubenswrapper[4708]: I0227 18:01:07.048091 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 18:01:07 crc kubenswrapper[4708]: I0227 18:01:07.048104 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3b26d5e-d907-420b-b4be-bdb12fd169e7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 18:01:07 crc kubenswrapper[4708]: I0227 18:01:07.418961 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536921-w8pnk" event={"ID":"d3b26d5e-d907-420b-b4be-bdb12fd169e7","Type":"ContainerDied","Data":"5e734559d29d5f41248b74775962dfb28d9857df363ad5f74196629391fea5bb"} Feb 27 18:01:07 crc kubenswrapper[4708]: I0227 18:01:07.419329 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5e734559d29d5f41248b74775962dfb28d9857df363ad5f74196629391fea5bb" Feb 27 18:01:07 crc kubenswrapper[4708]: I0227 18:01:07.419068 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29536921-w8pnk" Feb 27 18:01:40 crc kubenswrapper[4708]: I0227 18:01:40.993199 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-dhwkc"] Feb 27 18:01:40 crc kubenswrapper[4708]: E0227 18:01:40.994163 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3b26d5e-d907-420b-b4be-bdb12fd169e7" containerName="keystone-cron" Feb 27 18:01:40 crc kubenswrapper[4708]: I0227 18:01:40.994178 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3b26d5e-d907-420b-b4be-bdb12fd169e7" containerName="keystone-cron" Feb 27 18:01:40 crc kubenswrapper[4708]: I0227 18:01:40.994412 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3b26d5e-d907-420b-b4be-bdb12fd169e7" containerName="keystone-cron" Feb 27 18:01:40 crc kubenswrapper[4708]: I0227 18:01:40.996190 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:01:41 crc kubenswrapper[4708]: I0227 18:01:41.016225 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dhwkc"] Feb 27 18:01:41 crc kubenswrapper[4708]: I0227 18:01:41.128428 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-catalog-content\") pod \"redhat-operators-dhwkc\" (UID: \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\") " pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:01:41 crc kubenswrapper[4708]: I0227 18:01:41.128606 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-utilities\") pod \"redhat-operators-dhwkc\" (UID: \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\") " pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:01:41 crc kubenswrapper[4708]: I0227 18:01:41.128652 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfpmf\" (UniqueName: \"kubernetes.io/projected/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-kube-api-access-sfpmf\") pod \"redhat-operators-dhwkc\" (UID: \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\") " pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:01:41 crc kubenswrapper[4708]: I0227 18:01:41.230223 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-catalog-content\") pod \"redhat-operators-dhwkc\" (UID: \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\") " pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:01:41 crc kubenswrapper[4708]: I0227 18:01:41.230363 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-utilities\") pod \"redhat-operators-dhwkc\" (UID: \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\") " pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:01:41 crc kubenswrapper[4708]: I0227 18:01:41.230413 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfpmf\" (UniqueName: \"kubernetes.io/projected/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-kube-api-access-sfpmf\") pod \"redhat-operators-dhwkc\" (UID: \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\") " pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:01:41 crc kubenswrapper[4708]: I0227 18:01:41.230888 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-utilities\") pod \"redhat-operators-dhwkc\" (UID: \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\") " pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:01:41 crc kubenswrapper[4708]: I0227 18:01:41.231117 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-catalog-content\") pod \"redhat-operators-dhwkc\" (UID: \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\") " pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:01:41 crc kubenswrapper[4708]: I0227 18:01:41.327075 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-sfpmf\" (UniqueName: \"kubernetes.io/projected/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-kube-api-access-sfpmf\") pod \"redhat-operators-dhwkc\" (UID: \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\") " pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:01:41 crc kubenswrapper[4708]: I0227 18:01:41.334491 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:01:41 crc kubenswrapper[4708]: I0227 18:01:41.822225 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-dhwkc"] Feb 27 18:01:42 crc kubenswrapper[4708]: I0227 18:01:42.848828 4708 generic.go:334] "Generic (PLEG): container finished" podID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" containerID="97d1c67403f77fc8f7379393c5cea6410b00c418ae53bd72f1990d690890ca6e" exitCode=0 Feb 27 18:01:42 crc kubenswrapper[4708]: I0227 18:01:42.848913 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhwkc" event={"ID":"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236","Type":"ContainerDied","Data":"97d1c67403f77fc8f7379393c5cea6410b00c418ae53bd72f1990d690890ca6e"} Feb 27 18:01:42 crc kubenswrapper[4708]: I0227 18:01:42.849126 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhwkc" event={"ID":"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236","Type":"ContainerStarted","Data":"0ea8262931287b70b2563cb81396b9a5584e188ea7ffcc25c8338195602c10c4"} Feb 27 18:01:42 crc kubenswrapper[4708]: I0227 18:01:42.850862 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 18:01:43 crc kubenswrapper[4708]: E0227 18:01:43.729890 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 18:01:43 crc kubenswrapper[4708]: E0227 18:01:43.730301 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfpmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dhwkc_openshift-marketplace(28ad2711-e74a-40f9-8dc5-d2bd2f2ca236): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:01:43 crc kubenswrapper[4708]: E0227 18:01:43.731651 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-dhwkc" podUID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" Feb 27 18:01:43 crc kubenswrapper[4708]: E0227 18:01:43.859680 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dhwkc" podUID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" Feb 27 18:01:58 crc kubenswrapper[4708]: E0227 18:01:58.851272 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 18:01:58 crc kubenswrapper[4708]: E0227 18:01:58.852610 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-sfpmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-dhwkc_openshift-marketplace(28ad2711-e74a-40f9-8dc5-d2bd2f2ca236): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:01:58 crc kubenswrapper[4708]: E0227 18:01:58.853763 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-dhwkc" podUID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" Feb 27 18:02:00 crc kubenswrapper[4708]: I0227 18:02:00.164428 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536922-7znds"] Feb 27 18:02:00 crc kubenswrapper[4708]: I0227 18:02:00.166892 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536922-7znds" Feb 27 18:02:00 crc kubenswrapper[4708]: I0227 18:02:00.169408 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 18:02:00 crc kubenswrapper[4708]: I0227 18:02:00.170388 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:02:00 crc kubenswrapper[4708]: I0227 18:02:00.170563 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:02:00 crc kubenswrapper[4708]: I0227 18:02:00.177744 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536922-7znds"] Feb 27 18:02:00 crc kubenswrapper[4708]: I0227 18:02:00.336313 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kh5d\" (UniqueName: \"kubernetes.io/projected/e1070acf-438a-4619-9a43-e06fbad54ada-kube-api-access-5kh5d\") pod \"auto-csr-approver-29536922-7znds\" (UID: \"e1070acf-438a-4619-9a43-e06fbad54ada\") " pod="openshift-infra/auto-csr-approver-29536922-7znds" Feb 27 18:02:00 crc kubenswrapper[4708]: I0227 18:02:00.438478 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5kh5d\" (UniqueName: \"kubernetes.io/projected/e1070acf-438a-4619-9a43-e06fbad54ada-kube-api-access-5kh5d\") pod \"auto-csr-approver-29536922-7znds\" (UID: \"e1070acf-438a-4619-9a43-e06fbad54ada\") " pod="openshift-infra/auto-csr-approver-29536922-7znds" Feb 27 18:02:00 crc kubenswrapper[4708]: I0227 18:02:00.456478 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5kh5d\" (UniqueName: \"kubernetes.io/projected/e1070acf-438a-4619-9a43-e06fbad54ada-kube-api-access-5kh5d\") pod \"auto-csr-approver-29536922-7znds\" (UID: \"e1070acf-438a-4619-9a43-e06fbad54ada\") " pod="openshift-infra/auto-csr-approver-29536922-7znds" Feb 27 18:02:00 crc kubenswrapper[4708]: I0227 18:02:00.492723 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536922-7znds" Feb 27 18:02:01 crc kubenswrapper[4708]: I0227 18:02:01.050472 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536922-7znds"] Feb 27 18:02:01 crc kubenswrapper[4708]: E0227 18:02:01.931712 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:02:01 crc kubenswrapper[4708]: E0227 18:02:01.931858 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:02:01 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:02:01 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-5kh5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536922-7znds_openshift-infra(e1070acf-438a-4619-9a43-e06fbad54ada): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:02:01 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:02:01 crc kubenswrapper[4708]: E0227 18:02:01.933395 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536922-7znds" podUID="e1070acf-438a-4619-9a43-e06fbad54ada" Feb 27 18:02:02 crc kubenswrapper[4708]: I0227 18:02:02.048687 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536922-7znds" event={"ID":"e1070acf-438a-4619-9a43-e06fbad54ada","Type":"ContainerStarted","Data":"1766fa397724116b5d1de492a91a3d51e46bb471835d895bdfc71035368c4b62"} Feb 27 18:02:02 crc kubenswrapper[4708]: E0227 18:02:02.051675 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536922-7znds" podUID="e1070acf-438a-4619-9a43-e06fbad54ada" Feb 27 18:02:03 crc kubenswrapper[4708]: E0227 18:02:03.059530 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536922-7znds" podUID="e1070acf-438a-4619-9a43-e06fbad54ada" Feb 27 18:02:13 crc kubenswrapper[4708]: E0227 18:02:13.231922 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-dhwkc" podUID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" Feb 27 18:02:15 crc kubenswrapper[4708]: I0227 18:02:15.186031 4708 generic.go:334] "Generic (PLEG): container finished" podID="e1070acf-438a-4619-9a43-e06fbad54ada" containerID="44e39c4066c7ef199f64c3cc8d080c7d472bc3ad7a498dd54f0e2832054c7b86" exitCode=0 Feb 27 18:02:15 crc kubenswrapper[4708]: I0227 18:02:15.186157 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536922-7znds" event={"ID":"e1070acf-438a-4619-9a43-e06fbad54ada","Type":"ContainerDied","Data":"44e39c4066c7ef199f64c3cc8d080c7d472bc3ad7a498dd54f0e2832054c7b86"} Feb 27 18:02:16 crc kubenswrapper[4708]: I0227 18:02:16.711196 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536922-7znds" Feb 27 18:02:16 crc kubenswrapper[4708]: I0227 18:02:16.798939 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kh5d\" (UniqueName: \"kubernetes.io/projected/e1070acf-438a-4619-9a43-e06fbad54ada-kube-api-access-5kh5d\") pod \"e1070acf-438a-4619-9a43-e06fbad54ada\" (UID: \"e1070acf-438a-4619-9a43-e06fbad54ada\") " Feb 27 18:02:16 crc kubenswrapper[4708]: I0227 18:02:16.805569 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1070acf-438a-4619-9a43-e06fbad54ada-kube-api-access-5kh5d" (OuterVolumeSpecName: "kube-api-access-5kh5d") pod "e1070acf-438a-4619-9a43-e06fbad54ada" (UID: "e1070acf-438a-4619-9a43-e06fbad54ada"). InnerVolumeSpecName "kube-api-access-5kh5d". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:02:16 crc kubenswrapper[4708]: I0227 18:02:16.902511 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5kh5d\" (UniqueName: \"kubernetes.io/projected/e1070acf-438a-4619-9a43-e06fbad54ada-kube-api-access-5kh5d\") on node \"crc\" DevicePath \"\"" Feb 27 18:02:17 crc kubenswrapper[4708]: I0227 18:02:17.212090 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536922-7znds" event={"ID":"e1070acf-438a-4619-9a43-e06fbad54ada","Type":"ContainerDied","Data":"1766fa397724116b5d1de492a91a3d51e46bb471835d895bdfc71035368c4b62"} Feb 27 18:02:17 crc kubenswrapper[4708]: I0227 18:02:17.212157 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1766fa397724116b5d1de492a91a3d51e46bb471835d895bdfc71035368c4b62" Feb 27 18:02:17 crc kubenswrapper[4708]: I0227 18:02:17.212165 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536922-7znds" Feb 27 18:02:17 crc kubenswrapper[4708]: I0227 18:02:17.811151 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536916-jvlr9"] Feb 27 18:02:17 crc kubenswrapper[4708]: I0227 18:02:17.823257 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536916-jvlr9"] Feb 27 18:02:18 crc kubenswrapper[4708]: I0227 18:02:18.239389 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e09bd3f-9e27-4168-b9a6-4855dd0dbaac" path="/var/lib/kubelet/pods/8e09bd3f-9e27-4168-b9a6-4855dd0dbaac/volumes" Feb 27 18:02:28 crc kubenswrapper[4708]: I0227 18:02:28.331490 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhwkc" event={"ID":"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236","Type":"ContainerStarted","Data":"4180b75ba6b4c21bc8855469c7a1d552df8ad0ebd84c6c5571ae662344872c40"} Feb 27 18:02:33 crc kubenswrapper[4708]: I0227 18:02:33.389818 4708 generic.go:334] "Generic (PLEG): container finished" podID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" containerID="4180b75ba6b4c21bc8855469c7a1d552df8ad0ebd84c6c5571ae662344872c40" exitCode=0 Feb 27 18:02:33 crc kubenswrapper[4708]: I0227 18:02:33.389952 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhwkc" event={"ID":"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236","Type":"ContainerDied","Data":"4180b75ba6b4c21bc8855469c7a1d552df8ad0ebd84c6c5571ae662344872c40"} Feb 27 18:02:34 crc kubenswrapper[4708]: I0227 18:02:34.403011 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhwkc" event={"ID":"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236","Type":"ContainerStarted","Data":"a0b200eaaa18f58712ef9fdbc12a2ed2fe03321f4625f7cab8a8060cedc1f806"} Feb 27 18:02:34 crc kubenswrapper[4708]: I0227 18:02:34.434208 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-dhwkc" podStartSLOduration=3.390486841 podStartE2EDuration="54.434171714s" podCreationTimestamp="2026-02-27 18:01:40 +0000 UTC" firstStartedPulling="2026-02-27 18:01:42.850601455 +0000 UTC m=+4101.366399042" lastFinishedPulling="2026-02-27 18:02:33.894286328 +0000 UTC m=+4152.410083915" observedRunningTime="2026-02-27 18:02:34.430291815 +0000 UTC m=+4152.946089402" watchObservedRunningTime="2026-02-27 18:02:34.434171714 +0000 UTC m=+4152.949969341" Feb 27 18:02:41 crc kubenswrapper[4708]: I0227 18:02:41.335505 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:02:41 crc kubenswrapper[4708]: I0227 18:02:41.336133 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:02:42 crc kubenswrapper[4708]: I0227 18:02:42.412566 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-dhwkc" podUID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" containerName="registry-server" probeResult="failure" output=< Feb 27 18:02:42 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 18:02:42 crc kubenswrapper[4708]: > Feb 27 18:02:51 crc kubenswrapper[4708]: I0227 18:02:51.397830 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:02:51 crc kubenswrapper[4708]: I0227 
18:02:51.455231 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:02:51 crc kubenswrapper[4708]: I0227 18:02:51.637535 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dhwkc"] Feb 27 18:02:52 crc kubenswrapper[4708]: I0227 18:02:52.600252 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-dhwkc" podUID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" containerName="registry-server" containerID="cri-o://a0b200eaaa18f58712ef9fdbc12a2ed2fe03321f4625f7cab8a8060cedc1f806" gracePeriod=2 Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.153134 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.172387 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-utilities\") pod \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\" (UID: \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\") " Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.172484 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-catalog-content\") pod \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\" (UID: \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\") " Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.172518 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfpmf\" (UniqueName: \"kubernetes.io/projected/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-kube-api-access-sfpmf\") pod \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\" (UID: \"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236\") " Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.173262 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-utilities" (OuterVolumeSpecName: "utilities") pod "28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" (UID: "28ad2711-e74a-40f9-8dc5-d2bd2f2ca236"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.179654 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-kube-api-access-sfpmf" (OuterVolumeSpecName: "kube-api-access-sfpmf") pod "28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" (UID: "28ad2711-e74a-40f9-8dc5-d2bd2f2ca236"). InnerVolumeSpecName "kube-api-access-sfpmf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.275255 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.275569 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfpmf\" (UniqueName: \"kubernetes.io/projected/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-kube-api-access-sfpmf\") on node \"crc\" DevicePath \"\"" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.294339 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" (UID: "28ad2711-e74a-40f9-8dc5-d2bd2f2ca236"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.377231 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.621585 4708 generic.go:334] "Generic (PLEG): container finished" podID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" containerID="a0b200eaaa18f58712ef9fdbc12a2ed2fe03321f4625f7cab8a8060cedc1f806" exitCode=0 Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.621667 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-dhwkc" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.621675 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhwkc" event={"ID":"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236","Type":"ContainerDied","Data":"a0b200eaaa18f58712ef9fdbc12a2ed2fe03321f4625f7cab8a8060cedc1f806"} Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.622274 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-dhwkc" event={"ID":"28ad2711-e74a-40f9-8dc5-d2bd2f2ca236","Type":"ContainerDied","Data":"0ea8262931287b70b2563cb81396b9a5584e188ea7ffcc25c8338195602c10c4"} Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.622319 4708 scope.go:117] "RemoveContainer" containerID="a0b200eaaa18f58712ef9fdbc12a2ed2fe03321f4625f7cab8a8060cedc1f806" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.667681 4708 scope.go:117] "RemoveContainer" containerID="4180b75ba6b4c21bc8855469c7a1d552df8ad0ebd84c6c5571ae662344872c40" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.668194 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-dhwkc"] Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.677359 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-dhwkc"] Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.701391 4708 scope.go:117] "RemoveContainer" containerID="97d1c67403f77fc8f7379393c5cea6410b00c418ae53bd72f1990d690890ca6e" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.741919 4708 scope.go:117] "RemoveContainer" containerID="a0b200eaaa18f58712ef9fdbc12a2ed2fe03321f4625f7cab8a8060cedc1f806" Feb 27 18:02:53 crc kubenswrapper[4708]: E0227 18:02:53.743717 4708 log.go:32] "ContainerStatus 
from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a0b200eaaa18f58712ef9fdbc12a2ed2fe03321f4625f7cab8a8060cedc1f806\": container with ID starting with a0b200eaaa18f58712ef9fdbc12a2ed2fe03321f4625f7cab8a8060cedc1f806 not found: ID does not exist" containerID="a0b200eaaa18f58712ef9fdbc12a2ed2fe03321f4625f7cab8a8060cedc1f806" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.743776 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a0b200eaaa18f58712ef9fdbc12a2ed2fe03321f4625f7cab8a8060cedc1f806"} err="failed to get container status \"a0b200eaaa18f58712ef9fdbc12a2ed2fe03321f4625f7cab8a8060cedc1f806\": rpc error: code = NotFound desc = could not find container \"a0b200eaaa18f58712ef9fdbc12a2ed2fe03321f4625f7cab8a8060cedc1f806\": container with ID starting with a0b200eaaa18f58712ef9fdbc12a2ed2fe03321f4625f7cab8a8060cedc1f806 not found: ID does not exist" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.743809 4708 scope.go:117] "RemoveContainer" containerID="4180b75ba6b4c21bc8855469c7a1d552df8ad0ebd84c6c5571ae662344872c40" Feb 27 18:02:53 crc kubenswrapper[4708]: E0227 18:02:53.744332 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4180b75ba6b4c21bc8855469c7a1d552df8ad0ebd84c6c5571ae662344872c40\": container with ID starting with 4180b75ba6b4c21bc8855469c7a1d552df8ad0ebd84c6c5571ae662344872c40 not found: ID does not exist" containerID="4180b75ba6b4c21bc8855469c7a1d552df8ad0ebd84c6c5571ae662344872c40" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.744386 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4180b75ba6b4c21bc8855469c7a1d552df8ad0ebd84c6c5571ae662344872c40"} err="failed to get container status \"4180b75ba6b4c21bc8855469c7a1d552df8ad0ebd84c6c5571ae662344872c40\": rpc error: code = NotFound desc = could not find container \"4180b75ba6b4c21bc8855469c7a1d552df8ad0ebd84c6c5571ae662344872c40\": container with ID starting with 4180b75ba6b4c21bc8855469c7a1d552df8ad0ebd84c6c5571ae662344872c40 not found: ID does not exist" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.744414 4708 scope.go:117] "RemoveContainer" containerID="97d1c67403f77fc8f7379393c5cea6410b00c418ae53bd72f1990d690890ca6e" Feb 27 18:02:53 crc kubenswrapper[4708]: E0227 18:02:53.744714 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97d1c67403f77fc8f7379393c5cea6410b00c418ae53bd72f1990d690890ca6e\": container with ID starting with 97d1c67403f77fc8f7379393c5cea6410b00c418ae53bd72f1990d690890ca6e not found: ID does not exist" containerID="97d1c67403f77fc8f7379393c5cea6410b00c418ae53bd72f1990d690890ca6e" Feb 27 18:02:53 crc kubenswrapper[4708]: I0227 18:02:53.744747 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97d1c67403f77fc8f7379393c5cea6410b00c418ae53bd72f1990d690890ca6e"} err="failed to get container status \"97d1c67403f77fc8f7379393c5cea6410b00c418ae53bd72f1990d690890ca6e\": rpc error: code = NotFound desc = could not find container \"97d1c67403f77fc8f7379393c5cea6410b00c418ae53bd72f1990d690890ca6e\": container with ID starting with 97d1c67403f77fc8f7379393c5cea6410b00c418ae53bd72f1990d690890ca6e not found: ID does not exist" Feb 27 18:02:54 crc kubenswrapper[4708]: I0227 18:02:54.249839 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" path="/var/lib/kubelet/pods/28ad2711-e74a-40f9-8dc5-d2bd2f2ca236/volumes" Feb 27 18:03:03 crc kubenswrapper[4708]: I0227 18:03:03.584068 4708 scope.go:117] "RemoveContainer" containerID="4ffd1d36f2c8cb2eb20ccc3022e998a59c6d97b452fdec5bc945601b3705c4e2" Feb 27 18:03:05 crc kubenswrapper[4708]: I0227 18:03:05.631790 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:03:05 crc kubenswrapper[4708]: I0227 18:03:05.632706 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:03:35 crc kubenswrapper[4708]: I0227 18:03:35.631625 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:03:35 crc kubenswrapper[4708]: I0227 18:03:35.632224 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.151246 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536924-7s2wb"] Feb 27 18:04:00 crc kubenswrapper[4708]: E0227 18:04:00.152623 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" containerName="extract-content" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.152648 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" containerName="extract-content" Feb 27 18:04:00 crc kubenswrapper[4708]: E0227 18:04:00.152677 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" containerName="extract-utilities" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.152696 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" containerName="extract-utilities" Feb 27 18:04:00 crc kubenswrapper[4708]: E0227 18:04:00.152734 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e1070acf-438a-4619-9a43-e06fbad54ada" containerName="oc" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.152749 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1070acf-438a-4619-9a43-e06fbad54ada" containerName="oc" Feb 27 18:04:00 crc kubenswrapper[4708]: E0227 18:04:00.152788 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" containerName="registry-server" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.152801 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" containerName="registry-server" Feb 27 
18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.153230 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="28ad2711-e74a-40f9-8dc5-d2bd2f2ca236" containerName="registry-server" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.153267 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="e1070acf-438a-4619-9a43-e06fbad54ada" containerName="oc" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.154553 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536924-7s2wb" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.162916 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536924-7s2wb"] Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.163403 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.163461 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.163495 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.205790 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfwgm\" (UniqueName: \"kubernetes.io/projected/74986739-6955-4b40-b3e5-6bde3a3c5695-kube-api-access-sfwgm\") pod \"auto-csr-approver-29536924-7s2wb\" (UID: \"74986739-6955-4b40-b3e5-6bde3a3c5695\") " pod="openshift-infra/auto-csr-approver-29536924-7s2wb" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.307323 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfwgm\" (UniqueName: \"kubernetes.io/projected/74986739-6955-4b40-b3e5-6bde3a3c5695-kube-api-access-sfwgm\") pod \"auto-csr-approver-29536924-7s2wb\" (UID: \"74986739-6955-4b40-b3e5-6bde3a3c5695\") " pod="openshift-infra/auto-csr-approver-29536924-7s2wb" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.325794 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfwgm\" (UniqueName: \"kubernetes.io/projected/74986739-6955-4b40-b3e5-6bde3a3c5695-kube-api-access-sfwgm\") pod \"auto-csr-approver-29536924-7s2wb\" (UID: \"74986739-6955-4b40-b3e5-6bde3a3c5695\") " pod="openshift-infra/auto-csr-approver-29536924-7s2wb" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.485417 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536924-7s2wb" Feb 27 18:04:00 crc kubenswrapper[4708]: I0227 18:04:00.986049 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536924-7s2wb"] Feb 27 18:04:01 crc kubenswrapper[4708]: I0227 18:04:01.366959 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536924-7s2wb" event={"ID":"74986739-6955-4b40-b3e5-6bde3a3c5695","Type":"ContainerStarted","Data":"2cb8a1458c10fd08bbe9a3b2934b43b6a5e7585622891e0907550e931ee6658b"} Feb 27 18:04:03 crc kubenswrapper[4708]: I0227 18:04:03.391120 4708 generic.go:334] "Generic (PLEG): container finished" podID="74986739-6955-4b40-b3e5-6bde3a3c5695" containerID="a389c2e469b522445449069ce38b172556a14ba36ab1145134ea84ec2f032890" exitCode=0 Feb 27 18:04:03 crc kubenswrapper[4708]: I0227 18:04:03.391297 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536924-7s2wb" event={"ID":"74986739-6955-4b40-b3e5-6bde3a3c5695","Type":"ContainerDied","Data":"a389c2e469b522445449069ce38b172556a14ba36ab1145134ea84ec2f032890"} Feb 27 18:04:04 crc kubenswrapper[4708]: I0227 18:04:04.947928 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536924-7s2wb" Feb 27 18:04:05 crc kubenswrapper[4708]: I0227 18:04:05.014119 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfwgm\" (UniqueName: \"kubernetes.io/projected/74986739-6955-4b40-b3e5-6bde3a3c5695-kube-api-access-sfwgm\") pod \"74986739-6955-4b40-b3e5-6bde3a3c5695\" (UID: \"74986739-6955-4b40-b3e5-6bde3a3c5695\") " Feb 27 18:04:05 crc kubenswrapper[4708]: I0227 18:04:05.047190 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/74986739-6955-4b40-b3e5-6bde3a3c5695-kube-api-access-sfwgm" (OuterVolumeSpecName: "kube-api-access-sfwgm") pod "74986739-6955-4b40-b3e5-6bde3a3c5695" (UID: "74986739-6955-4b40-b3e5-6bde3a3c5695"). InnerVolumeSpecName "kube-api-access-sfwgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:04:05 crc kubenswrapper[4708]: I0227 18:04:05.118097 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfwgm\" (UniqueName: \"kubernetes.io/projected/74986739-6955-4b40-b3e5-6bde3a3c5695-kube-api-access-sfwgm\") on node \"crc\" DevicePath \"\"" Feb 27 18:04:05 crc kubenswrapper[4708]: I0227 18:04:05.414698 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536924-7s2wb" event={"ID":"74986739-6955-4b40-b3e5-6bde3a3c5695","Type":"ContainerDied","Data":"2cb8a1458c10fd08bbe9a3b2934b43b6a5e7585622891e0907550e931ee6658b"} Feb 27 18:04:05 crc kubenswrapper[4708]: I0227 18:04:05.415087 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2cb8a1458c10fd08bbe9a3b2934b43b6a5e7585622891e0907550e931ee6658b" Feb 27 18:04:05 crc kubenswrapper[4708]: I0227 18:04:05.414775 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536924-7s2wb" Feb 27 18:04:05 crc kubenswrapper[4708]: I0227 18:04:05.631671 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:04:05 crc kubenswrapper[4708]: I0227 18:04:05.632052 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:04:05 crc kubenswrapper[4708]: I0227 18:04:05.632287 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 18:04:05 crc kubenswrapper[4708]: I0227 18:04:05.633308 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2032b2d772007daf5716b35cf3a58fa4ff30ba2a5de11ac4ec08c8034d52d619"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:04:05 crc kubenswrapper[4708]: I0227 18:04:05.633819 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://2032b2d772007daf5716b35cf3a58fa4ff30ba2a5de11ac4ec08c8034d52d619" gracePeriod=600 Feb 27 18:04:06 crc kubenswrapper[4708]: I0227 18:04:06.048389 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536918-qhzmk"] Feb 27 18:04:06 crc kubenswrapper[4708]: I0227 18:04:06.063643 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536918-qhzmk"] Feb 27 18:04:06 crc kubenswrapper[4708]: I0227 18:04:06.247834 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13cfe6fb-32f4-4bb0-86b3-4096b99fd489" path="/var/lib/kubelet/pods/13cfe6fb-32f4-4bb0-86b3-4096b99fd489/volumes" Feb 27 18:04:06 crc kubenswrapper[4708]: I0227 18:04:06.426496 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="2032b2d772007daf5716b35cf3a58fa4ff30ba2a5de11ac4ec08c8034d52d619" exitCode=0 Feb 27 18:04:06 crc kubenswrapper[4708]: I0227 18:04:06.426552 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"2032b2d772007daf5716b35cf3a58fa4ff30ba2a5de11ac4ec08c8034d52d619"} Feb 27 18:04:06 crc kubenswrapper[4708]: I0227 18:04:06.426617 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05"} Feb 27 18:04:06 crc kubenswrapper[4708]: I0227 18:04:06.426636 4708 scope.go:117] "RemoveContainer" 
containerID="61d57585c2c334c5091714fb5ae5cb532ba31c33ef2edd6ee49ee3da21524ad4" Feb 27 18:05:03 crc kubenswrapper[4708]: I0227 18:05:03.704781 4708 scope.go:117] "RemoveContainer" containerID="640720e0093d79ef1b87156536370f5f815e50ef5f648ba7d5ee674bace00d2a" Feb 27 18:05:21 crc kubenswrapper[4708]: E0227 18:05:21.556582 4708 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.182:48452->38.102.83.182:43573: write tcp 38.102.83.182:48452->38.102.83.182:43573: write: broken pipe Feb 27 18:05:57 crc kubenswrapper[4708]: E0227 18:05:57.207977 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Feb 27 18:06:00 crc kubenswrapper[4708]: I0227 18:06:00.157309 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536926-27vc5"] Feb 27 18:06:00 crc kubenswrapper[4708]: E0227 18:06:00.158519 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="74986739-6955-4b40-b3e5-6bde3a3c5695" containerName="oc" Feb 27 18:06:00 crc kubenswrapper[4708]: I0227 18:06:00.158537 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="74986739-6955-4b40-b3e5-6bde3a3c5695" containerName="oc" Feb 27 18:06:00 crc kubenswrapper[4708]: I0227 18:06:00.158894 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="74986739-6955-4b40-b3e5-6bde3a3c5695" containerName="oc" Feb 27 18:06:00 crc kubenswrapper[4708]: I0227 18:06:00.162204 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536926-27vc5" Feb 27 18:06:00 crc kubenswrapper[4708]: I0227 18:06:00.165695 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 18:06:00 crc kubenswrapper[4708]: I0227 18:06:00.165701 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:06:00 crc kubenswrapper[4708]: I0227 18:06:00.166019 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:06:00 crc kubenswrapper[4708]: I0227 18:06:00.170249 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536926-27vc5"] Feb 27 18:06:00 crc kubenswrapper[4708]: I0227 18:06:00.302358 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l59vl\" (UniqueName: \"kubernetes.io/projected/4169fe13-35f1-4450-b318-9b29670cdf2d-kube-api-access-l59vl\") pod \"auto-csr-approver-29536926-27vc5\" (UID: \"4169fe13-35f1-4450-b318-9b29670cdf2d\") " pod="openshift-infra/auto-csr-approver-29536926-27vc5" Feb 27 18:06:00 crc kubenswrapper[4708]: I0227 18:06:00.406784 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l59vl\" (UniqueName: \"kubernetes.io/projected/4169fe13-35f1-4450-b318-9b29670cdf2d-kube-api-access-l59vl\") pod \"auto-csr-approver-29536926-27vc5\" (UID: \"4169fe13-35f1-4450-b318-9b29670cdf2d\") " pod="openshift-infra/auto-csr-approver-29536926-27vc5" Feb 27 18:06:00 crc kubenswrapper[4708]: I0227 18:06:00.431154 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l59vl\" (UniqueName: \"kubernetes.io/projected/4169fe13-35f1-4450-b318-9b29670cdf2d-kube-api-access-l59vl\") pod 
\"auto-csr-approver-29536926-27vc5\" (UID: \"4169fe13-35f1-4450-b318-9b29670cdf2d\") " pod="openshift-infra/auto-csr-approver-29536926-27vc5" Feb 27 18:06:00 crc kubenswrapper[4708]: I0227 18:06:00.485441 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536926-27vc5" Feb 27 18:06:00 crc kubenswrapper[4708]: I0227 18:06:00.974906 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536926-27vc5"] Feb 27 18:06:01 crc kubenswrapper[4708]: I0227 18:06:01.681685 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536926-27vc5" event={"ID":"4169fe13-35f1-4450-b318-9b29670cdf2d","Type":"ContainerStarted","Data":"9da5182d5aab0c08a54ceee82c435ab8733c0b634f326d0681c6395693522a7c"} Feb 27 18:06:01 crc kubenswrapper[4708]: E0227 18:06:01.904653 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:06:01 crc kubenswrapper[4708]: E0227 18:06:01.904783 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:06:01 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:06:01 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l59vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536926-27vc5_openshift-infra(4169fe13-35f1-4450-b318-9b29670cdf2d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:06:01 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:06:01 crc kubenswrapper[4708]: E0227 18:06:01.906839 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:06:02 crc kubenswrapper[4708]: E0227 
18:06:02.693152 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:06:18 crc kubenswrapper[4708]: E0227 18:06:18.170546 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:06:18 crc kubenswrapper[4708]: E0227 18:06:18.171112 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:06:18 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:06:18 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l59vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536926-27vc5_openshift-infra(4169fe13-35f1-4450-b318-9b29670cdf2d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:06:18 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:06:18 crc kubenswrapper[4708]: E0227 18:06:18.172369 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:06:33 crc kubenswrapper[4708]: E0227 18:06:33.232210 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:06:35 crc kubenswrapper[4708]: I0227 18:06:35.631630 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:06:35 crc kubenswrapper[4708]: I0227 18:06:35.632062 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:06:47 crc kubenswrapper[4708]: I0227 18:06:47.231193 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 18:06:48 crc kubenswrapper[4708]: E0227 18:06:48.987149 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:06:48 crc kubenswrapper[4708]: E0227 18:06:48.987803 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:06:48 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:06:48 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l59vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536926-27vc5_openshift-infra(4169fe13-35f1-4450-b318-9b29670cdf2d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:06:48 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:06:48 crc kubenswrapper[4708]: E0227 18:06:48.989206 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:07:03 crc kubenswrapper[4708]: E0227 18:07:03.230342 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:07:05 crc kubenswrapper[4708]: I0227 18:07:05.632059 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:07:05 crc kubenswrapper[4708]: I0227 18:07:05.632495 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:07:16 crc kubenswrapper[4708]: E0227 18:07:16.232799 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:07:28 crc kubenswrapper[4708]: E0227 18:07:28.232819 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:07:35 crc kubenswrapper[4708]: I0227 18:07:35.632220 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:07:35 crc kubenswrapper[4708]: I0227 18:07:35.632932 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:07:35 crc kubenswrapper[4708]: I0227 18:07:35.632985 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 18:07:35 crc kubenswrapper[4708]: I0227 18:07:35.633676 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:07:35 crc kubenswrapper[4708]: I0227 18:07:35.633745 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" gracePeriod=600 Feb 27 18:07:35 crc kubenswrapper[4708]: E0227 
18:07:35.761538 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:07:35 crc kubenswrapper[4708]: I0227 18:07:35.786806 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" exitCode=0 Feb 27 18:07:35 crc kubenswrapper[4708]: I0227 18:07:35.786907 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05"} Feb 27 18:07:35 crc kubenswrapper[4708]: I0227 18:07:35.787273 4708 scope.go:117] "RemoveContainer" containerID="2032b2d772007daf5716b35cf3a58fa4ff30ba2a5de11ac4ec08c8034d52d619" Feb 27 18:07:35 crc kubenswrapper[4708]: I0227 18:07:35.788139 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:07:35 crc kubenswrapper[4708]: E0227 18:07:35.788529 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:07:37 crc kubenswrapper[4708]: I0227 18:07:37.471560 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ckftw"] Feb 27 18:07:37 crc kubenswrapper[4708]: I0227 18:07:37.475633 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:07:37 crc kubenswrapper[4708]: I0227 18:07:37.494816 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ckftw"] Feb 27 18:07:37 crc kubenswrapper[4708]: I0227 18:07:37.662761 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-utilities\") pod \"certified-operators-ckftw\" (UID: \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\") " pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:07:37 crc kubenswrapper[4708]: I0227 18:07:37.663133 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlgf7\" (UniqueName: \"kubernetes.io/projected/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-kube-api-access-rlgf7\") pod \"certified-operators-ckftw\" (UID: \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\") " pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:07:37 crc kubenswrapper[4708]: I0227 18:07:37.663309 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-catalog-content\") pod \"certified-operators-ckftw\" (UID: \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\") " pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:07:37 crc kubenswrapper[4708]: I0227 18:07:37.765133 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-utilities\") pod \"certified-operators-ckftw\" (UID: \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\") " pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:07:37 crc kubenswrapper[4708]: I0227 18:07:37.765220 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rlgf7\" (UniqueName: \"kubernetes.io/projected/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-kube-api-access-rlgf7\") pod \"certified-operators-ckftw\" (UID: \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\") " pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:07:37 crc kubenswrapper[4708]: I0227 18:07:37.765272 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-catalog-content\") pod \"certified-operators-ckftw\" (UID: \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\") " pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:07:37 crc kubenswrapper[4708]: I0227 18:07:37.765696 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-catalog-content\") pod \"certified-operators-ckftw\" (UID: \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\") " pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:07:37 crc kubenswrapper[4708]: I0227 18:07:37.766467 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-utilities\") pod \"certified-operators-ckftw\" (UID: \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\") " pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:07:37 crc kubenswrapper[4708]: I0227 18:07:37.789490 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-rlgf7\" (UniqueName: \"kubernetes.io/projected/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-kube-api-access-rlgf7\") pod \"certified-operators-ckftw\" (UID: \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\") " pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:07:37 crc kubenswrapper[4708]: I0227 18:07:37.819207 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:07:38 crc kubenswrapper[4708]: I0227 18:07:38.264411 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ckftw"] Feb 27 18:07:38 crc kubenswrapper[4708]: I0227 18:07:38.833087 4708 generic.go:334] "Generic (PLEG): container finished" podID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" containerID="74c5d3be81fa89655e4c1b8adf1dad0957ccd45189012a0ac8acd2941cdd93ec" exitCode=0 Feb 27 18:07:38 crc kubenswrapper[4708]: I0227 18:07:38.833129 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckftw" event={"ID":"c4f2fb46-c9f9-4359-8b5e-f6f68499311f","Type":"ContainerDied","Data":"74c5d3be81fa89655e4c1b8adf1dad0957ccd45189012a0ac8acd2941cdd93ec"} Feb 27 18:07:38 crc kubenswrapper[4708]: I0227 18:07:38.833443 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckftw" event={"ID":"c4f2fb46-c9f9-4359-8b5e-f6f68499311f","Type":"ContainerStarted","Data":"0b33cfb10c5f8331741be434adb20d2161cfeaca63879bae862c5bf920346430"} Feb 27 18:07:39 crc kubenswrapper[4708]: E0227 18:07:39.471961 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 18:07:39 crc kubenswrapper[4708]: E0227 18:07:39.472702 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlgf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ckftw_openshift-marketplace(c4f2fb46-c9f9-4359-8b5e-f6f68499311f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:07:39 crc kubenswrapper[4708]: E0227 18:07:39.474080 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-ckftw" podUID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" Feb 27 18:07:39 crc kubenswrapper[4708]: E0227 18:07:39.849712 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ckftw" podUID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" Feb 27 18:07:41 crc kubenswrapper[4708]: I0227 18:07:41.649497 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fbgk9"] Feb 27 18:07:41 crc kubenswrapper[4708]: I0227 18:07:41.653621 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:07:41 crc kubenswrapper[4708]: I0227 18:07:41.685943 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fbgk9"] Feb 27 18:07:42 crc kubenswrapper[4708]: E0227 18:07:42.125205 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:07:42 crc kubenswrapper[4708]: E0227 18:07:42.125527 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:07:42 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:07:42 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l59vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536926-27vc5_openshift-infra(4169fe13-35f1-4450-b318-9b29670cdf2d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:07:42 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:07:42 crc kubenswrapper[4708]: E0227 18:07:42.126884 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.324100 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5" podUID="dde28522-3138-4c50-b3c5-1e26d61b96e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.609296 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/f9c4090f-cad8-4027-99dc-512d4a41e1bc-catalog-content\") pod \"community-operators-fbgk9\" (UID: \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\") " pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.609351 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ll7m\" (UniqueName: \"kubernetes.io/projected/f9c4090f-cad8-4027-99dc-512d4a41e1bc-kube-api-access-9ll7m\") pod \"community-operators-fbgk9\" (UID: \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\") " pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.609373 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c4090f-cad8-4027-99dc-512d4a41e1bc-utilities\") pod \"community-operators-fbgk9\" (UID: \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\") " pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.633459 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack-operators/infra-operator-controller-manager-f7fcc58b9-sxmk5" podUID="dde28522-3138-4c50-b3c5-1e26d61b96e1" containerName="manager" probeResult="failure" output="Get \"http://10.217.0.88:8081/readyz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.672521 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6n2qm"] Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.685046 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.689152 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6n2qm"] Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.712877 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c4090f-cad8-4027-99dc-512d4a41e1bc-catalog-content\") pod \"community-operators-fbgk9\" (UID: \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\") " pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.712920 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ll7m\" (UniqueName: \"kubernetes.io/projected/f9c4090f-cad8-4027-99dc-512d4a41e1bc-kube-api-access-9ll7m\") pod \"community-operators-fbgk9\" (UID: \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\") " pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.712945 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c4090f-cad8-4027-99dc-512d4a41e1bc-utilities\") pod \"community-operators-fbgk9\" (UID: \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\") " pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.713431 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c4090f-cad8-4027-99dc-512d4a41e1bc-catalog-content\") pod \"community-operators-fbgk9\" (UID: \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\") " 
pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.714881 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c4090f-cad8-4027-99dc-512d4a41e1bc-utilities\") pod \"community-operators-fbgk9\" (UID: \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\") " pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.739278 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ll7m\" (UniqueName: \"kubernetes.io/projected/f9c4090f-cad8-4027-99dc-512d4a41e1bc-kube-api-access-9ll7m\") pod \"community-operators-fbgk9\" (UID: \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\") " pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.815067 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-utilities\") pod \"redhat-marketplace-6n2qm\" (UID: \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\") " pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.815122 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7gzc\" (UniqueName: \"kubernetes.io/projected/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-kube-api-access-j7gzc\") pod \"redhat-marketplace-6n2qm\" (UID: \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\") " pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.815157 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-catalog-content\") pod \"redhat-marketplace-6n2qm\" (UID: \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\") " pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.886520 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.917197 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-utilities\") pod \"redhat-marketplace-6n2qm\" (UID: \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\") " pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.917240 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7gzc\" (UniqueName: \"kubernetes.io/projected/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-kube-api-access-j7gzc\") pod \"redhat-marketplace-6n2qm\" (UID: \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\") " pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.917263 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-catalog-content\") pod \"redhat-marketplace-6n2qm\" (UID: \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\") " pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.917673 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-utilities\") pod \"redhat-marketplace-6n2qm\" (UID: \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\") " pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.917826 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-catalog-content\") pod \"redhat-marketplace-6n2qm\" (UID: \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\") " pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:07:42 crc kubenswrapper[4708]: I0227 18:07:42.932245 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7gzc\" (UniqueName: \"kubernetes.io/projected/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-kube-api-access-j7gzc\") pod \"redhat-marketplace-6n2qm\" (UID: \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\") " pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:07:43 crc kubenswrapper[4708]: I0227 18:07:43.016418 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:07:43 crc kubenswrapper[4708]: I0227 18:07:43.400103 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fbgk9"] Feb 27 18:07:43 crc kubenswrapper[4708]: I0227 18:07:43.551145 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6n2qm"] Feb 27 18:07:43 crc kubenswrapper[4708]: W0227 18:07:43.636608 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a2c3a08_cc0b_48a3_a0c8_fdde8c0e2cd7.slice/crio-da5d211df0e13ffdd1750a17dbd82367a3e3973a5b70ccc3d3731f6f30db6294 WatchSource:0}: Error finding container da5d211df0e13ffdd1750a17dbd82367a3e3973a5b70ccc3d3731f6f30db6294: Status 404 returned error can't find the container with id da5d211df0e13ffdd1750a17dbd82367a3e3973a5b70ccc3d3731f6f30db6294 Feb 27 18:07:43 crc kubenswrapper[4708]: I0227 18:07:43.680324 4708 generic.go:334] "Generic (PLEG): container finished" podID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" containerID="29d3a8ecf51078fb2ef7ff833f4b77b3d40c127720f8e419b14d9e537cae859f" exitCode=0 Feb 27 18:07:43 crc kubenswrapper[4708]: I0227 18:07:43.680435 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbgk9" event={"ID":"f9c4090f-cad8-4027-99dc-512d4a41e1bc","Type":"ContainerDied","Data":"29d3a8ecf51078fb2ef7ff833f4b77b3d40c127720f8e419b14d9e537cae859f"} Feb 27 18:07:43 crc kubenswrapper[4708]: I0227 18:07:43.680551 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbgk9" event={"ID":"f9c4090f-cad8-4027-99dc-512d4a41e1bc","Type":"ContainerStarted","Data":"521ddecb1df56491dd1fadb08328a87849118ec6f3468a84ddd52bf815c3c61b"} Feb 27 18:07:43 crc kubenswrapper[4708]: I0227 18:07:43.684054 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6n2qm" event={"ID":"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7","Type":"ContainerStarted","Data":"da5d211df0e13ffdd1750a17dbd82367a3e3973a5b70ccc3d3731f6f30db6294"} Feb 27 18:07:44 crc kubenswrapper[4708]: E0227 18:07:44.419152 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 18:07:44 crc kubenswrapper[4708]: E0227 18:07:44.419537 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ll7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fbgk9_openshift-marketplace(f9c4090f-cad8-4027-99dc-512d4a41e1bc): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:07:44 crc kubenswrapper[4708]: E0227 18:07:44.420735 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-fbgk9" podUID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" Feb 27 18:07:44 crc kubenswrapper[4708]: I0227 18:07:44.704384 4708 generic.go:334] "Generic (PLEG): container finished" podID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" containerID="9dc28a406132f190f86ffc1e1fdff2988972b9b2332deba38690f2367cc0b334" exitCode=0 Feb 27 18:07:44 crc kubenswrapper[4708]: I0227 18:07:44.704756 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6n2qm" event={"ID":"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7","Type":"ContainerDied","Data":"9dc28a406132f190f86ffc1e1fdff2988972b9b2332deba38690f2367cc0b334"} Feb 27 18:07:44 crc kubenswrapper[4708]: E0227 18:07:44.717745 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fbgk9" podUID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" Feb 27 18:07:45 crc kubenswrapper[4708]: E0227 18:07:45.389859 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 18:07:45 crc kubenswrapper[4708]: E0227 18:07:45.389982 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7gzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6n2qm_openshift-marketplace(6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:07:45 crc kubenswrapper[4708]: E0227 18:07:45.391123 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-6n2qm" podUID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" Feb 27 18:07:45 crc kubenswrapper[4708]: E0227 18:07:45.713538 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6n2qm" podUID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" Feb 27 18:07:48 crc kubenswrapper[4708]: I0227 18:07:48.229346 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:07:48 crc 
kubenswrapper[4708]: E0227 18:07:48.230743 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:07:51 crc kubenswrapper[4708]: E0227 18:07:51.154074 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 18:07:51 crc kubenswrapper[4708]: E0227 18:07:51.154547 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rlgf7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-ckftw_openshift-marketplace(c4f2fb46-c9f9-4359-8b5e-f6f68499311f): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:07:51 crc kubenswrapper[4708]: E0227 18:07:51.156110 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server 
Error)\"" pod="openshift-marketplace/certified-operators-ckftw" podUID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" Feb 27 18:07:57 crc kubenswrapper[4708]: E0227 18:07:57.230767 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:07:58 crc kubenswrapper[4708]: E0227 18:07:58.183777 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 18:07:58 crc kubenswrapper[4708]: E0227 18:07:58.184275 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j7gzc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-6n2qm_openshift-marketplace(6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:07:58 crc kubenswrapper[4708]: E0227 18:07:58.185667 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server 
Error)\"" pod="openshift-marketplace/redhat-marketplace-6n2qm" podUID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" Feb 27 18:07:58 crc kubenswrapper[4708]: E0227 18:07:58.918061 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 18:07:58 crc kubenswrapper[4708]: E0227 18:07:58.918274 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9ll7m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-fbgk9_openshift-marketplace(f9c4090f-cad8-4027-99dc-512d4a41e1bc): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:07:58 crc kubenswrapper[4708]: E0227 18:07:58.919543 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-fbgk9" podUID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" Feb 27 18:08:00 crc kubenswrapper[4708]: I0227 18:08:00.169173 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536928-k2dpc"] Feb 27 18:08:00 crc kubenswrapper[4708]: I0227 18:08:00.171858 4708 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" Feb 27 18:08:00 crc kubenswrapper[4708]: I0227 18:08:00.192059 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536928-k2dpc"] Feb 27 18:08:00 crc kubenswrapper[4708]: I0227 18:08:00.314526 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8pv\" (UniqueName: \"kubernetes.io/projected/7be693cf-322d-4ac9-b66c-35a281510ef4-kube-api-access-tb8pv\") pod \"auto-csr-approver-29536928-k2dpc\" (UID: \"7be693cf-322d-4ac9-b66c-35a281510ef4\") " pod="openshift-infra/auto-csr-approver-29536928-k2dpc" Feb 27 18:08:00 crc kubenswrapper[4708]: I0227 18:08:00.417858 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tb8pv\" (UniqueName: \"kubernetes.io/projected/7be693cf-322d-4ac9-b66c-35a281510ef4-kube-api-access-tb8pv\") pod \"auto-csr-approver-29536928-k2dpc\" (UID: \"7be693cf-322d-4ac9-b66c-35a281510ef4\") " pod="openshift-infra/auto-csr-approver-29536928-k2dpc" Feb 27 18:08:00 crc kubenswrapper[4708]: I0227 18:08:00.453030 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tb8pv\" (UniqueName: \"kubernetes.io/projected/7be693cf-322d-4ac9-b66c-35a281510ef4-kube-api-access-tb8pv\") pod \"auto-csr-approver-29536928-k2dpc\" (UID: \"7be693cf-322d-4ac9-b66c-35a281510ef4\") " pod="openshift-infra/auto-csr-approver-29536928-k2dpc" Feb 27 18:08:00 crc kubenswrapper[4708]: I0227 18:08:00.500642 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" Feb 27 18:08:01 crc kubenswrapper[4708]: I0227 18:08:01.228977 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:08:01 crc kubenswrapper[4708]: E0227 18:08:01.229817 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:08:01 crc kubenswrapper[4708]: I0227 18:08:01.933715 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536928-k2dpc"] Feb 27 18:08:02 crc kubenswrapper[4708]: I0227 18:08:02.910655 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" event={"ID":"7be693cf-322d-4ac9-b66c-35a281510ef4","Type":"ContainerStarted","Data":"f170b41141676aa2cfd601797bece4bc18a7259afb9806d614ad9ef5fb551ade"} Feb 27 18:08:04 crc kubenswrapper[4708]: E0227 18:08:04.232119 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-ckftw" podUID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" Feb 27 18:08:09 crc kubenswrapper[4708]: E0227 18:08:09.233000 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:08:12 crc kubenswrapper[4708]: E0227 18:08:12.245544 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-6n2qm" podUID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" Feb 27 18:08:12 crc kubenswrapper[4708]: E0227 18:08:12.245912 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-fbgk9" podUID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" Feb 27 18:08:16 crc kubenswrapper[4708]: I0227 18:08:16.229346 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:08:16 crc kubenswrapper[4708]: E0227 18:08:16.231297 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:08:20 crc kubenswrapper[4708]: I0227 18:08:20.119133 4708 generic.go:334] "Generic (PLEG): container finished" podID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" containerID="fe9f55d82b62a8c21c637befcee2748cb003e378b0328f4943430b73b1d42b1b" exitCode=0 Feb 27 18:08:20 crc kubenswrapper[4708]: I0227 18:08:20.119226 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckftw" event={"ID":"c4f2fb46-c9f9-4359-8b5e-f6f68499311f","Type":"ContainerDied","Data":"fe9f55d82b62a8c21c637befcee2748cb003e378b0328f4943430b73b1d42b1b"} Feb 27 18:08:21 crc kubenswrapper[4708]: I0227 18:08:21.137560 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckftw" event={"ID":"c4f2fb46-c9f9-4359-8b5e-f6f68499311f","Type":"ContainerStarted","Data":"d61f299fc22fd1bc00d75ddacf36b68c039e3372622a18a2b3a43733b551f582"} Feb 27 18:08:21 crc kubenswrapper[4708]: I0227 18:08:21.182385 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ckftw" podStartSLOduration=2.4483196019999998 podStartE2EDuration="44.182352467s" podCreationTimestamp="2026-02-27 18:07:37 +0000 UTC" firstStartedPulling="2026-02-27 18:07:38.834975219 +0000 UTC m=+4457.350772806" lastFinishedPulling="2026-02-27 18:08:20.569008074 +0000 UTC m=+4499.084805671" observedRunningTime="2026-02-27 18:08:21.16539212 +0000 UTC m=+4499.681189757" watchObservedRunningTime="2026-02-27 18:08:21.182352467 +0000 UTC m=+4499.698150094" Feb 27 18:08:24 crc kubenswrapper[4708]: E0227 18:08:24.232823 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:08:25 crc 
kubenswrapper[4708]: I0227 18:08:25.213281 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6n2qm" event={"ID":"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7","Type":"ContainerStarted","Data":"fca67efbf9aefb05764f6a516f364282dcfef6c21769e8645b9a57f1b476e7ee"} Feb 27 18:08:27 crc kubenswrapper[4708]: I0227 18:08:27.228283 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:08:27 crc kubenswrapper[4708]: E0227 18:08:27.229078 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:08:27 crc kubenswrapper[4708]: I0227 18:08:27.240444 4708 generic.go:334] "Generic (PLEG): container finished" podID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" containerID="fca67efbf9aefb05764f6a516f364282dcfef6c21769e8645b9a57f1b476e7ee" exitCode=0 Feb 27 18:08:27 crc kubenswrapper[4708]: I0227 18:08:27.240508 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6n2qm" event={"ID":"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7","Type":"ContainerDied","Data":"fca67efbf9aefb05764f6a516f364282dcfef6c21769e8645b9a57f1b476e7ee"} Feb 27 18:08:27 crc kubenswrapper[4708]: I0227 18:08:27.819421 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:08:27 crc kubenswrapper[4708]: I0227 18:08:27.819810 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:08:27 crc kubenswrapper[4708]: I0227 18:08:27.901109 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:08:28 crc kubenswrapper[4708]: I0227 18:08:28.259707 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbgk9" event={"ID":"f9c4090f-cad8-4027-99dc-512d4a41e1bc","Type":"ContainerStarted","Data":"8b416698e58dea189276ffe575031b0321e87124a31e1ed6d679fcf28a221d39"} Feb 27 18:08:28 crc kubenswrapper[4708]: I0227 18:08:28.264675 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6n2qm" event={"ID":"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7","Type":"ContainerStarted","Data":"9e0095869b6366d1aea1cbdcca12b231631d4df7e8909b7b9246b55a1b4456c5"} Feb 27 18:08:28 crc kubenswrapper[4708]: I0227 18:08:28.311923 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6n2qm" podStartSLOduration=3.367792293 podStartE2EDuration="46.311895206s" podCreationTimestamp="2026-02-27 18:07:42 +0000 UTC" firstStartedPulling="2026-02-27 18:07:44.711931255 +0000 UTC m=+4463.227728842" lastFinishedPulling="2026-02-27 18:08:27.656034128 +0000 UTC m=+4506.171831755" observedRunningTime="2026-02-27 18:08:28.30170779 +0000 UTC m=+4506.817505387" watchObservedRunningTime="2026-02-27 18:08:28.311895206 +0000 UTC m=+4506.827692833" Feb 27 18:08:28 crc kubenswrapper[4708]: I0227 18:08:28.332857 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:08:29 crc kubenswrapper[4708]: I0227 18:08:29.274902 4708 generic.go:334] "Generic (PLEG): container finished" podID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" containerID="8b416698e58dea189276ffe575031b0321e87124a31e1ed6d679fcf28a221d39" exitCode=0 Feb 27 18:08:29 crc kubenswrapper[4708]: I0227 18:08:29.275137 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbgk9" event={"ID":"f9c4090f-cad8-4027-99dc-512d4a41e1bc","Type":"ContainerDied","Data":"8b416698e58dea189276ffe575031b0321e87124a31e1ed6d679fcf28a221d39"} Feb 27 18:08:30 crc kubenswrapper[4708]: E0227 18:08:30.257607 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:08:30 crc kubenswrapper[4708]: E0227 18:08:30.258049 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:08:30 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:08:30 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tb8pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536928-k2dpc_openshift-infra(7be693cf-322d-4ac9-b66c-35a281510ef4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:08:30 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:08:30 crc kubenswrapper[4708]: E0227 18:08:30.259331 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:08:30 crc kubenswrapper[4708]: I0227 18:08:30.290171 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbgk9" 
event={"ID":"f9c4090f-cad8-4027-99dc-512d4a41e1bc","Type":"ContainerStarted","Data":"730fa620346e906759e0a3daabad126f717cc40153ed56c23592125fc42a9eec"} Feb 27 18:08:30 crc kubenswrapper[4708]: E0227 18:08:30.292266 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:08:30 crc kubenswrapper[4708]: I0227 18:08:30.335046 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fbgk9" podStartSLOduration=3.336724827 podStartE2EDuration="49.335029831s" podCreationTimestamp="2026-02-27 18:07:41 +0000 UTC" firstStartedPulling="2026-02-27 18:07:43.682833375 +0000 UTC m=+4462.198630972" lastFinishedPulling="2026-02-27 18:08:29.681138379 +0000 UTC m=+4508.196935976" observedRunningTime="2026-02-27 18:08:30.330512744 +0000 UTC m=+4508.846310331" watchObservedRunningTime="2026-02-27 18:08:30.335029831 +0000 UTC m=+4508.850827408" Feb 27 18:08:30 crc kubenswrapper[4708]: I0227 18:08:30.696909 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ckftw"] Feb 27 18:08:31 crc kubenswrapper[4708]: I0227 18:08:31.299122 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-ckftw" podUID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" containerName="registry-server" containerID="cri-o://d61f299fc22fd1bc00d75ddacf36b68c039e3372622a18a2b3a43733b551f582" gracePeriod=2 Feb 27 18:08:31 crc kubenswrapper[4708]: E0227 18:08:31.634672 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc4f2fb46_c9f9_4359_8b5e_f6f68499311f.slice/crio-conmon-d61f299fc22fd1bc00d75ddacf36b68c039e3372622a18a2b3a43733b551f582.scope\": RecentStats: unable to find data in memory cache]" Feb 27 18:08:31 crc kubenswrapper[4708]: I0227 18:08:31.960357 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.111030 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-catalog-content\") pod \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\" (UID: \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\") " Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.111210 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-utilities\") pod \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\" (UID: \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\") " Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.111296 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rlgf7\" (UniqueName: \"kubernetes.io/projected/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-kube-api-access-rlgf7\") pod \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\" (UID: \"c4f2fb46-c9f9-4359-8b5e-f6f68499311f\") " Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.111693 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-utilities" (OuterVolumeSpecName: "utilities") pod "c4f2fb46-c9f9-4359-8b5e-f6f68499311f" (UID: "c4f2fb46-c9f9-4359-8b5e-f6f68499311f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.111872 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.117599 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-kube-api-access-rlgf7" (OuterVolumeSpecName: "kube-api-access-rlgf7") pod "c4f2fb46-c9f9-4359-8b5e-f6f68499311f" (UID: "c4f2fb46-c9f9-4359-8b5e-f6f68499311f"). InnerVolumeSpecName "kube-api-access-rlgf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.177670 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c4f2fb46-c9f9-4359-8b5e-f6f68499311f" (UID: "c4f2fb46-c9f9-4359-8b5e-f6f68499311f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.213762 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rlgf7\" (UniqueName: \"kubernetes.io/projected/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-kube-api-access-rlgf7\") on node \"crc\" DevicePath \"\"" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.213790 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c4f2fb46-c9f9-4359-8b5e-f6f68499311f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.315377 4708 generic.go:334] "Generic (PLEG): container finished" podID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" containerID="d61f299fc22fd1bc00d75ddacf36b68c039e3372622a18a2b3a43733b551f582" exitCode=0 Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.315419 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckftw" event={"ID":"c4f2fb46-c9f9-4359-8b5e-f6f68499311f","Type":"ContainerDied","Data":"d61f299fc22fd1bc00d75ddacf36b68c039e3372622a18a2b3a43733b551f582"} Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.315448 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ckftw" event={"ID":"c4f2fb46-c9f9-4359-8b5e-f6f68499311f","Type":"ContainerDied","Data":"0b33cfb10c5f8331741be434adb20d2161cfeaca63879bae862c5bf920346430"} Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.315466 4708 scope.go:117] "RemoveContainer" containerID="d61f299fc22fd1bc00d75ddacf36b68c039e3372622a18a2b3a43733b551f582" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.315491 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ckftw" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.343367 4708 scope.go:117] "RemoveContainer" containerID="fe9f55d82b62a8c21c637befcee2748cb003e378b0328f4943430b73b1d42b1b" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.343778 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-ckftw"] Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.355396 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-ckftw"] Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.369120 4708 scope.go:117] "RemoveContainer" containerID="74c5d3be81fa89655e4c1b8adf1dad0957ccd45189012a0ac8acd2941cdd93ec" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.410999 4708 scope.go:117] "RemoveContainer" containerID="d61f299fc22fd1bc00d75ddacf36b68c039e3372622a18a2b3a43733b551f582" Feb 27 18:08:32 crc kubenswrapper[4708]: E0227 18:08:32.411518 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d61f299fc22fd1bc00d75ddacf36b68c039e3372622a18a2b3a43733b551f582\": container with ID starting with d61f299fc22fd1bc00d75ddacf36b68c039e3372622a18a2b3a43733b551f582 not found: ID does not exist" containerID="d61f299fc22fd1bc00d75ddacf36b68c039e3372622a18a2b3a43733b551f582" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.411569 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d61f299fc22fd1bc00d75ddacf36b68c039e3372622a18a2b3a43733b551f582"} err="failed to get container status \"d61f299fc22fd1bc00d75ddacf36b68c039e3372622a18a2b3a43733b551f582\": rpc error: code = NotFound desc = could not find container \"d61f299fc22fd1bc00d75ddacf36b68c039e3372622a18a2b3a43733b551f582\": container with ID starting with d61f299fc22fd1bc00d75ddacf36b68c039e3372622a18a2b3a43733b551f582 not found: ID does not exist" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.411605 4708 scope.go:117] "RemoveContainer" containerID="fe9f55d82b62a8c21c637befcee2748cb003e378b0328f4943430b73b1d42b1b" Feb 27 18:08:32 crc kubenswrapper[4708]: E0227 18:08:32.412269 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe9f55d82b62a8c21c637befcee2748cb003e378b0328f4943430b73b1d42b1b\": container with ID starting with fe9f55d82b62a8c21c637befcee2748cb003e378b0328f4943430b73b1d42b1b not found: ID does not exist" containerID="fe9f55d82b62a8c21c637befcee2748cb003e378b0328f4943430b73b1d42b1b" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.412339 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe9f55d82b62a8c21c637befcee2748cb003e378b0328f4943430b73b1d42b1b"} err="failed to get container status \"fe9f55d82b62a8c21c637befcee2748cb003e378b0328f4943430b73b1d42b1b\": rpc error: code = NotFound desc = could not find container \"fe9f55d82b62a8c21c637befcee2748cb003e378b0328f4943430b73b1d42b1b\": container with ID starting with fe9f55d82b62a8c21c637befcee2748cb003e378b0328f4943430b73b1d42b1b not found: ID does not exist" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.412372 4708 scope.go:117] "RemoveContainer" containerID="74c5d3be81fa89655e4c1b8adf1dad0957ccd45189012a0ac8acd2941cdd93ec" Feb 27 18:08:32 crc kubenswrapper[4708]: E0227 18:08:32.412658 4708 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"74c5d3be81fa89655e4c1b8adf1dad0957ccd45189012a0ac8acd2941cdd93ec\": container with ID starting with 74c5d3be81fa89655e4c1b8adf1dad0957ccd45189012a0ac8acd2941cdd93ec not found: ID does not exist" containerID="74c5d3be81fa89655e4c1b8adf1dad0957ccd45189012a0ac8acd2941cdd93ec" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.412690 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"74c5d3be81fa89655e4c1b8adf1dad0957ccd45189012a0ac8acd2941cdd93ec"} err="failed to get container status \"74c5d3be81fa89655e4c1b8adf1dad0957ccd45189012a0ac8acd2941cdd93ec\": rpc error: code = NotFound desc = could not find container \"74c5d3be81fa89655e4c1b8adf1dad0957ccd45189012a0ac8acd2941cdd93ec\": container with ID starting with 74c5d3be81fa89655e4c1b8adf1dad0957ccd45189012a0ac8acd2941cdd93ec not found: ID does not exist" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.887483 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:08:32 crc kubenswrapper[4708]: I0227 18:08:32.887543 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:08:33 crc kubenswrapper[4708]: I0227 18:08:33.017143 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:08:33 crc kubenswrapper[4708]: I0227 18:08:33.017642 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:08:33 crc kubenswrapper[4708]: I0227 18:08:33.949923 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fbgk9" podUID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" containerName="registry-server" probeResult="failure" output=< Feb 27 18:08:33 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 18:08:33 crc kubenswrapper[4708]: > Feb 27 18:08:34 crc kubenswrapper[4708]: I0227 18:08:34.084034 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-6n2qm" podUID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" containerName="registry-server" probeResult="failure" output=< Feb 27 18:08:34 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 18:08:34 crc kubenswrapper[4708]: > Feb 27 18:08:34 crc kubenswrapper[4708]: I0227 18:08:34.248940 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" path="/var/lib/kubelet/pods/c4f2fb46-c9f9-4359-8b5e-f6f68499311f/volumes" Feb 27 18:08:36 crc kubenswrapper[4708]: E0227 18:08:36.231265 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:08:41 crc kubenswrapper[4708]: I0227 18:08:41.235367 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:08:41 crc kubenswrapper[4708]: E0227 18:08:41.237971 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:08:42 crc kubenswrapper[4708]: I0227 18:08:42.958041 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:08:43 crc kubenswrapper[4708]: I0227 18:08:43.042491 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:08:43 crc kubenswrapper[4708]: I0227 18:08:43.085885 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:08:43 crc kubenswrapper[4708]: I0227 18:08:43.170744 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:08:45 crc kubenswrapper[4708]: I0227 18:08:45.499012 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fbgk9"] Feb 27 18:08:45 crc kubenswrapper[4708]: I0227 18:08:45.499882 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fbgk9" podUID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" containerName="registry-server" containerID="cri-o://730fa620346e906759e0a3daabad126f717cc40153ed56c23592125fc42a9eec" gracePeriod=2 Feb 27 18:08:46 crc kubenswrapper[4708]: E0227 18:08:46.120576 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:08:46 crc kubenswrapper[4708]: E0227 18:08:46.120991 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:08:46 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:08:46 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tb8pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536928-k2dpc_openshift-infra(7be693cf-322d-4ac9-b66c-35a281510ef4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:08:46 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:08:46 crc kubenswrapper[4708]: E0227 18:08:46.122966 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.130207 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.220226 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ll7m\" (UniqueName: \"kubernetes.io/projected/f9c4090f-cad8-4027-99dc-512d4a41e1bc-kube-api-access-9ll7m\") pod \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\" (UID: \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\") " Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.220528 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c4090f-cad8-4027-99dc-512d4a41e1bc-utilities\") pod \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\" (UID: \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\") " Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.220599 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c4090f-cad8-4027-99dc-512d4a41e1bc-catalog-content\") pod \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\" (UID: \"f9c4090f-cad8-4027-99dc-512d4a41e1bc\") " Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.221757 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9c4090f-cad8-4027-99dc-512d4a41e1bc-utilities" (OuterVolumeSpecName: "utilities") pod "f9c4090f-cad8-4027-99dc-512d4a41e1bc" (UID: "f9c4090f-cad8-4027-99dc-512d4a41e1bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.230095 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c4090f-cad8-4027-99dc-512d4a41e1bc-kube-api-access-9ll7m" (OuterVolumeSpecName: "kube-api-access-9ll7m") pod "f9c4090f-cad8-4027-99dc-512d4a41e1bc" (UID: "f9c4090f-cad8-4027-99dc-512d4a41e1bc"). InnerVolumeSpecName "kube-api-access-9ll7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.290683 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f9c4090f-cad8-4027-99dc-512d4a41e1bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f9c4090f-cad8-4027-99dc-512d4a41e1bc" (UID: "f9c4090f-cad8-4027-99dc-512d4a41e1bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.325108 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9c4090f-cad8-4027-99dc-512d4a41e1bc-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.325142 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9c4090f-cad8-4027-99dc-512d4a41e1bc-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.325151 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ll7m\" (UniqueName: \"kubernetes.io/projected/f9c4090f-cad8-4027-99dc-512d4a41e1bc-kube-api-access-9ll7m\") on node \"crc\" DevicePath \"\"" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.499440 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6n2qm"] Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.500158 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6n2qm" podUID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" containerName="registry-server" containerID="cri-o://9e0095869b6366d1aea1cbdcca12b231631d4df7e8909b7b9246b55a1b4456c5" gracePeriod=2 Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.589684 4708 generic.go:334] "Generic (PLEG): container finished" podID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" containerID="730fa620346e906759e0a3daabad126f717cc40153ed56c23592125fc42a9eec" exitCode=0 Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.589745 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbgk9" event={"ID":"f9c4090f-cad8-4027-99dc-512d4a41e1bc","Type":"ContainerDied","Data":"730fa620346e906759e0a3daabad126f717cc40153ed56c23592125fc42a9eec"} Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.589787 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fbgk9" event={"ID":"f9c4090f-cad8-4027-99dc-512d4a41e1bc","Type":"ContainerDied","Data":"521ddecb1df56491dd1fadb08328a87849118ec6f3468a84ddd52bf815c3c61b"} Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.589817 4708 scope.go:117] "RemoveContainer" containerID="730fa620346e906759e0a3daabad126f717cc40153ed56c23592125fc42a9eec" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.590317 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fbgk9" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.718980 4708 scope.go:117] "RemoveContainer" containerID="8b416698e58dea189276ffe575031b0321e87124a31e1ed6d679fcf28a221d39" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.718997 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fbgk9"] Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.729578 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fbgk9"] Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.766590 4708 scope.go:117] "RemoveContainer" containerID="29d3a8ecf51078fb2ef7ff833f4b77b3d40c127720f8e419b14d9e537cae859f" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.804991 4708 scope.go:117] "RemoveContainer" containerID="730fa620346e906759e0a3daabad126f717cc40153ed56c23592125fc42a9eec" Feb 27 18:08:46 crc kubenswrapper[4708]: E0227 18:08:46.805459 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"730fa620346e906759e0a3daabad126f717cc40153ed56c23592125fc42a9eec\": container with ID starting with 730fa620346e906759e0a3daabad126f717cc40153ed56c23592125fc42a9eec not found: ID does not exist" containerID="730fa620346e906759e0a3daabad126f717cc40153ed56c23592125fc42a9eec" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.805528 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"730fa620346e906759e0a3daabad126f717cc40153ed56c23592125fc42a9eec"} err="failed to get container status \"730fa620346e906759e0a3daabad126f717cc40153ed56c23592125fc42a9eec\": rpc error: code = NotFound desc = could not find container \"730fa620346e906759e0a3daabad126f717cc40153ed56c23592125fc42a9eec\": container with ID starting with 730fa620346e906759e0a3daabad126f717cc40153ed56c23592125fc42a9eec not found: ID does not exist" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.805555 4708 scope.go:117] "RemoveContainer" containerID="8b416698e58dea189276ffe575031b0321e87124a31e1ed6d679fcf28a221d39" Feb 27 18:08:46 crc kubenswrapper[4708]: E0227 18:08:46.805950 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b416698e58dea189276ffe575031b0321e87124a31e1ed6d679fcf28a221d39\": container with ID starting with 8b416698e58dea189276ffe575031b0321e87124a31e1ed6d679fcf28a221d39 not found: ID does not exist" containerID="8b416698e58dea189276ffe575031b0321e87124a31e1ed6d679fcf28a221d39" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.805986 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b416698e58dea189276ffe575031b0321e87124a31e1ed6d679fcf28a221d39"} err="failed to get container status \"8b416698e58dea189276ffe575031b0321e87124a31e1ed6d679fcf28a221d39\": rpc error: code = NotFound desc = could not find container \"8b416698e58dea189276ffe575031b0321e87124a31e1ed6d679fcf28a221d39\": container with ID starting with 8b416698e58dea189276ffe575031b0321e87124a31e1ed6d679fcf28a221d39 not found: ID does not exist" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.806008 4708 scope.go:117] "RemoveContainer" containerID="29d3a8ecf51078fb2ef7ff833f4b77b3d40c127720f8e419b14d9e537cae859f" Feb 27 18:08:46 crc kubenswrapper[4708]: E0227 18:08:46.806273 4708 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"29d3a8ecf51078fb2ef7ff833f4b77b3d40c127720f8e419b14d9e537cae859f\": container with ID starting with 29d3a8ecf51078fb2ef7ff833f4b77b3d40c127720f8e419b14d9e537cae859f not found: ID does not exist" containerID="29d3a8ecf51078fb2ef7ff833f4b77b3d40c127720f8e419b14d9e537cae859f" Feb 27 18:08:46 crc kubenswrapper[4708]: I0227 18:08:46.806305 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"29d3a8ecf51078fb2ef7ff833f4b77b3d40c127720f8e419b14d9e537cae859f"} err="failed to get container status \"29d3a8ecf51078fb2ef7ff833f4b77b3d40c127720f8e419b14d9e537cae859f\": rpc error: code = NotFound desc = could not find container \"29d3a8ecf51078fb2ef7ff833f4b77b3d40c127720f8e419b14d9e537cae859f\": container with ID starting with 29d3a8ecf51078fb2ef7ff833f4b77b3d40c127720f8e419b14d9e537cae859f not found: ID does not exist" Feb 27 18:08:47 crc kubenswrapper[4708]: I0227 18:08:47.609776 4708 generic.go:334] "Generic (PLEG): container finished" podID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" containerID="9e0095869b6366d1aea1cbdcca12b231631d4df7e8909b7b9246b55a1b4456c5" exitCode=0 Feb 27 18:08:47 crc kubenswrapper[4708]: I0227 18:08:47.609924 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6n2qm" event={"ID":"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7","Type":"ContainerDied","Data":"9e0095869b6366d1aea1cbdcca12b231631d4df7e8909b7b9246b55a1b4456c5"} Feb 27 18:08:47 crc kubenswrapper[4708]: I0227 18:08:47.610315 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6n2qm" event={"ID":"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7","Type":"ContainerDied","Data":"da5d211df0e13ffdd1750a17dbd82367a3e3973a5b70ccc3d3731f6f30db6294"} Feb 27 18:08:47 crc kubenswrapper[4708]: I0227 18:08:47.610344 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da5d211df0e13ffdd1750a17dbd82367a3e3973a5b70ccc3d3731f6f30db6294" Feb 27 18:08:48 crc kubenswrapper[4708]: I0227 18:08:48.162269 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:08:48 crc kubenswrapper[4708]: I0227 18:08:48.241089 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" path="/var/lib/kubelet/pods/f9c4090f-cad8-4027-99dc-512d4a41e1bc/volumes" Feb 27 18:08:48 crc kubenswrapper[4708]: I0227 18:08:48.270867 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7gzc\" (UniqueName: \"kubernetes.io/projected/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-kube-api-access-j7gzc\") pod \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\" (UID: \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\") " Feb 27 18:08:48 crc kubenswrapper[4708]: I0227 18:08:48.271058 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-catalog-content\") pod \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\" (UID: \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\") " Feb 27 18:08:48 crc kubenswrapper[4708]: I0227 18:08:48.271291 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-utilities\") pod \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\" (UID: \"6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7\") " Feb 27 18:08:48 crc kubenswrapper[4708]: I0227 18:08:48.272315 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-utilities" (OuterVolumeSpecName: "utilities") pod "6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" (UID: "6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:08:48 crc kubenswrapper[4708]: I0227 18:08:48.279913 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-kube-api-access-j7gzc" (OuterVolumeSpecName: "kube-api-access-j7gzc") pod "6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" (UID: "6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7"). InnerVolumeSpecName "kube-api-access-j7gzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:08:48 crc kubenswrapper[4708]: I0227 18:08:48.302561 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" (UID: "6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:08:48 crc kubenswrapper[4708]: I0227 18:08:48.375366 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7gzc\" (UniqueName: \"kubernetes.io/projected/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-kube-api-access-j7gzc\") on node \"crc\" DevicePath \"\"" Feb 27 18:08:48 crc kubenswrapper[4708]: I0227 18:08:48.375438 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:08:48 crc kubenswrapper[4708]: I0227 18:08:48.375467 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:08:48 crc kubenswrapper[4708]: I0227 18:08:48.627503 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6n2qm" Feb 27 18:08:48 crc kubenswrapper[4708]: I0227 18:08:48.681444 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6n2qm"] Feb 27 18:08:48 crc kubenswrapper[4708]: I0227 18:08:48.692720 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6n2qm"] Feb 27 18:08:49 crc kubenswrapper[4708]: E0227 18:08:49.230890 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:08:50 crc kubenswrapper[4708]: I0227 18:08:50.249001 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" path="/var/lib/kubelet/pods/6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7/volumes" Feb 27 18:08:55 crc kubenswrapper[4708]: I0227 18:08:55.229035 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:08:55 crc kubenswrapper[4708]: E0227 18:08:55.230107 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:08:57 crc kubenswrapper[4708]: E0227 18:08:57.231140 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:09:03 crc kubenswrapper[4708]: E0227 18:09:03.235536 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 
18:09:03 crc kubenswrapper[4708]: E0227 18:09:03.236643 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:09:03 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:09:03 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l59vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536926-27vc5_openshift-infra(4169fe13-35f1-4450-b318-9b29670cdf2d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:09:03 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:09:03 crc kubenswrapper[4708]: E0227 18:09:03.237927 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:09:06 crc kubenswrapper[4708]: I0227 18:09:06.228496 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:09:06 crc kubenswrapper[4708]: E0227 18:09:06.229296 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:09:13 crc kubenswrapper[4708]: E0227 18:09:13.125068 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:09:13 crc kubenswrapper[4708]: E0227 18:09:13.125744 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:09:13 crc kubenswrapper[4708]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:09:13 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tb8pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536928-k2dpc_openshift-infra(7be693cf-322d-4ac9-b66c-35a281510ef4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:09:13 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:09:13 crc kubenswrapper[4708]: E0227 18:09:13.127431 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:09:18 crc kubenswrapper[4708]: I0227 18:09:18.229732 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:09:18 crc kubenswrapper[4708]: E0227 18:09:18.230835 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:09:18 crc kubenswrapper[4708]: E0227 18:09:18.232269 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:09:27 crc kubenswrapper[4708]: E0227 18:09:27.231464 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:09:29 crc kubenswrapper[4708]: I0227 18:09:29.229086 4708 scope.go:117] "RemoveContainer" 
containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:09:29 crc kubenswrapper[4708]: E0227 18:09:29.229671 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:09:32 crc kubenswrapper[4708]: E0227 18:09:32.243843 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:09:39 crc kubenswrapper[4708]: E0227 18:09:39.230749 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:09:41 crc kubenswrapper[4708]: I0227 18:09:41.229097 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:09:41 crc kubenswrapper[4708]: E0227 18:09:41.229871 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:09:46 crc kubenswrapper[4708]: E0227 18:09:46.231735 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:09:51 crc kubenswrapper[4708]: E0227 18:09:51.232022 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:09:52 crc kubenswrapper[4708]: I0227 18:09:52.241911 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:09:52 crc kubenswrapper[4708]: E0227 18:09:52.243694 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:09:59 crc kubenswrapper[4708]: E0227 18:09:59.230715 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.153228 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536930-d9sgn"] Feb 27 18:10:00 crc kubenswrapper[4708]: E0227 18:10:00.154125 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" containerName="registry-server" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.154155 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" containerName="registry-server" Feb 27 18:10:00 crc kubenswrapper[4708]: E0227 18:10:00.154179 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" containerName="extract-utilities" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.154193 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" containerName="extract-utilities" Feb 27 18:10:00 crc kubenswrapper[4708]: E0227 18:10:00.154229 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" containerName="extract-utilities" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.154242 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" containerName="extract-utilities" Feb 27 18:10:00 crc kubenswrapper[4708]: E0227 18:10:00.154267 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" containerName="extract-content" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.154280 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" containerName="extract-content" Feb 27 18:10:00 crc kubenswrapper[4708]: E0227 18:10:00.154319 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" containerName="extract-utilities" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.154331 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" containerName="extract-utilities" Feb 27 18:10:00 crc kubenswrapper[4708]: E0227 18:10:00.154349 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" containerName="extract-content" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.154362 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" containerName="extract-content" Feb 27 18:10:00 crc kubenswrapper[4708]: E0227 18:10:00.154383 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" containerName="registry-server" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.154394 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" containerName="registry-server" Feb 27 18:10:00 crc kubenswrapper[4708]: E0227 18:10:00.154416 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" containerName="extract-content" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.154428 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" 
containerName="extract-content" Feb 27 18:10:00 crc kubenswrapper[4708]: E0227 18:10:00.154452 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" containerName="registry-server" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.154465 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" containerName="registry-server" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.154806 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c4f2fb46-c9f9-4359-8b5e-f6f68499311f" containerName="registry-server" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.154841 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f9c4090f-cad8-4027-99dc-512d4a41e1bc" containerName="registry-server" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.154899 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a2c3a08-cc0b-48a3-a0c8-fdde8c0e2cd7" containerName="registry-server" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.156192 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.169141 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536930-d9sgn"] Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.346131 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dm64\" (UniqueName: \"kubernetes.io/projected/fb343271-5527-4655-973b-f3a35b328fce-kube-api-access-6dm64\") pod \"auto-csr-approver-29536930-d9sgn\" (UID: \"fb343271-5527-4655-973b-f3a35b328fce\") " pod="openshift-infra/auto-csr-approver-29536930-d9sgn" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.447988 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dm64\" (UniqueName: \"kubernetes.io/projected/fb343271-5527-4655-973b-f3a35b328fce-kube-api-access-6dm64\") pod \"auto-csr-approver-29536930-d9sgn\" (UID: \"fb343271-5527-4655-973b-f3a35b328fce\") " pod="openshift-infra/auto-csr-approver-29536930-d9sgn" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.471377 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dm64\" (UniqueName: \"kubernetes.io/projected/fb343271-5527-4655-973b-f3a35b328fce-kube-api-access-6dm64\") pod \"auto-csr-approver-29536930-d9sgn\" (UID: \"fb343271-5527-4655-973b-f3a35b328fce\") " pod="openshift-infra/auto-csr-approver-29536930-d9sgn" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.481056 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" Feb 27 18:10:00 crc kubenswrapper[4708]: I0227 18:10:00.909175 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536930-d9sgn"] Feb 27 18:10:00 crc kubenswrapper[4708]: W0227 18:10:00.916677 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfb343271_5527_4655_973b_f3a35b328fce.slice/crio-3e36e817ad737307f4b7aa27952356ad380b17a3008f14eb670f56a3e8d815ee WatchSource:0}: Error finding container 3e36e817ad737307f4b7aa27952356ad380b17a3008f14eb670f56a3e8d815ee: Status 404 returned error can't find the container with id 3e36e817ad737307f4b7aa27952356ad380b17a3008f14eb670f56a3e8d815ee Feb 27 18:10:01 crc kubenswrapper[4708]: I0227 18:10:01.541254 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" event={"ID":"fb343271-5527-4655-973b-f3a35b328fce","Type":"ContainerStarted","Data":"3e36e817ad737307f4b7aa27952356ad380b17a3008f14eb670f56a3e8d815ee"} Feb 27 18:10:01 crc kubenswrapper[4708]: E0227 18:10:01.805989 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:10:01 crc kubenswrapper[4708]: E0227 18:10:01.806133 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:10:01 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:10:01 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6dm64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536930-d9sgn_openshift-infra(fb343271-5527-4655-973b-f3a35b328fce): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:10:01 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:10:01 crc kubenswrapper[4708]: E0227 18:10:01.807396 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:10:02 crc kubenswrapper[4708]: E0227 18:10:02.563448 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:10:04 crc kubenswrapper[4708]: I0227 18:10:04.228881 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:10:04 crc kubenswrapper[4708]: E0227 18:10:04.229429 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:10:07 crc kubenswrapper[4708]: E0227 18:10:07.178294 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:10:07 crc kubenswrapper[4708]: E0227 18:10:07.179074 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:10:07 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:10:07 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tb8pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536928-k2dpc_openshift-infra(7be693cf-322d-4ac9-b66c-35a281510ef4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:10:07 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:10:07 crc kubenswrapper[4708]: E0227 18:10:07.180260 4708 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:10:11 crc kubenswrapper[4708]: E0227 18:10:11.231556 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:10:15 crc kubenswrapper[4708]: E0227 18:10:15.344144 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:10:15 crc kubenswrapper[4708]: E0227 18:10:15.344989 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:10:15 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:10:15 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6dm64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536930-d9sgn_openshift-infra(fb343271-5527-4655-973b-f3a35b328fce): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:10:15 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:10:15 crc kubenswrapper[4708]: E0227 18:10:15.346201 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:10:19 crc kubenswrapper[4708]: I0227 18:10:19.229202 4708 
scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:10:19 crc kubenswrapper[4708]: E0227 18:10:19.231289 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:10:21 crc kubenswrapper[4708]: E0227 18:10:21.230621 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:10:26 crc kubenswrapper[4708]: E0227 18:10:26.231659 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:10:30 crc kubenswrapper[4708]: E0227 18:10:30.230448 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:10:32 crc kubenswrapper[4708]: I0227 18:10:32.234727 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:10:32 crc kubenswrapper[4708]: E0227 18:10:32.235501 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:10:32 crc kubenswrapper[4708]: E0227 18:10:32.238865 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:10:39 crc kubenswrapper[4708]: E0227 18:10:39.232245 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:10:43 crc kubenswrapper[4708]: E0227 18:10:43.117939 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" 
image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:10:43 crc kubenswrapper[4708]: E0227 18:10:43.119577 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:10:43 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:10:43 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6dm64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536930-d9sgn_openshift-infra(fb343271-5527-4655-973b-f3a35b328fce): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:10:43 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:10:43 crc kubenswrapper[4708]: E0227 18:10:43.120975 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:10:43 crc kubenswrapper[4708]: I0227 18:10:43.228799 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:10:43 crc kubenswrapper[4708]: E0227 18:10:43.229306 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:10:45 crc kubenswrapper[4708]: E0227 18:10:45.230491 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:10:50 crc kubenswrapper[4708]: E0227 18:10:50.230807 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:10:55 crc kubenswrapper[4708]: I0227 18:10:55.229544 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:10:55 crc kubenswrapper[4708]: E0227 18:10:55.230623 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:10:55 crc kubenswrapper[4708]: E0227 18:10:55.231294 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:10:59 crc kubenswrapper[4708]: E0227 18:10:59.229898 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:11:01 crc kubenswrapper[4708]: E0227 18:11:01.230774 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:11:06 crc kubenswrapper[4708]: I0227 18:11:06.229371 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:11:06 crc kubenswrapper[4708]: E0227 18:11:06.230657 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:11:08 crc kubenswrapper[4708]: E0227 18:11:08.231790 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:11:12 crc kubenswrapper[4708]: E0227 18:11:12.240355 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:11:15 crc kubenswrapper[4708]: E0227 18:11:15.230968 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:11:19 crc kubenswrapper[4708]: I0227 18:11:19.229933 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:11:19 crc kubenswrapper[4708]: E0227 18:11:19.231620 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:11:20 crc kubenswrapper[4708]: E0227 18:11:20.232381 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:11:28 crc kubenswrapper[4708]: E0227 18:11:28.200167 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:11:28 crc kubenswrapper[4708]: E0227 18:11:28.200910 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:11:28 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:11:28 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tb8pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536928-k2dpc_openshift-infra(7be693cf-322d-4ac9-b66c-35a281510ef4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:11:28 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:11:28 crc kubenswrapper[4708]: E0227 18:11:28.202427 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:11:30 crc kubenswrapper[4708]: E0227 18:11:30.230942 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:11:33 crc kubenswrapper[4708]: E0227 18:11:33.162331 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:11:33 crc kubenswrapper[4708]: E0227 18:11:33.163344 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:11:33 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:11:33 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6dm64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536930-d9sgn_openshift-infra(fb343271-5527-4655-973b-f3a35b328fce): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:11:33 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:11:33 crc kubenswrapper[4708]: E0227 18:11:33.165027 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:11:33 crc kubenswrapper[4708]: I0227 18:11:33.229452 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:11:33 crc 
kubenswrapper[4708]: E0227 18:11:33.230147 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:11:41 crc kubenswrapper[4708]: E0227 18:11:41.242069 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:11:44 crc kubenswrapper[4708]: E0227 18:11:44.231304 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:11:45 crc kubenswrapper[4708]: E0227 18:11:45.257121 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:11:45 crc kubenswrapper[4708]: E0227 18:11:45.257271 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:11:45 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:11:45 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l59vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536926-27vc5_openshift-infra(4169fe13-35f1-4450-b318-9b29670cdf2d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:11:45 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:11:45 crc kubenswrapper[4708]: E0227 18:11:45.258449 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading 
signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:11:46 crc kubenswrapper[4708]: I0227 18:11:46.228986 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:11:46 crc kubenswrapper[4708]: E0227 18:11:46.229822 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:11:53 crc kubenswrapper[4708]: E0227 18:11:53.231186 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:11:58 crc kubenswrapper[4708]: I0227 18:11:58.229890 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:11:58 crc kubenswrapper[4708]: E0227 18:11:58.230670 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:11:59 crc kubenswrapper[4708]: E0227 18:11:59.230754 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:12:00 crc kubenswrapper[4708]: I0227 18:12:00.165441 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536932-mq92q"] Feb 27 18:12:00 crc kubenswrapper[4708]: I0227 18:12:00.167540 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536932-mq92q" Feb 27 18:12:00 crc kubenswrapper[4708]: I0227 18:12:00.175808 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536932-mq92q"] Feb 27 18:12:00 crc kubenswrapper[4708]: E0227 18:12:00.230506 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:12:00 crc kubenswrapper[4708]: I0227 18:12:00.257634 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjgq4\" (UniqueName: \"kubernetes.io/projected/69db41fd-5c38-4d0a-8999-f8b595f26b06-kube-api-access-jjgq4\") pod \"auto-csr-approver-29536932-mq92q\" (UID: \"69db41fd-5c38-4d0a-8999-f8b595f26b06\") " pod="openshift-infra/auto-csr-approver-29536932-mq92q" Feb 27 18:12:00 crc kubenswrapper[4708]: I0227 18:12:00.359982 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjgq4\" (UniqueName: \"kubernetes.io/projected/69db41fd-5c38-4d0a-8999-f8b595f26b06-kube-api-access-jjgq4\") pod \"auto-csr-approver-29536932-mq92q\" (UID: \"69db41fd-5c38-4d0a-8999-f8b595f26b06\") " pod="openshift-infra/auto-csr-approver-29536932-mq92q" Feb 27 18:12:00 crc kubenswrapper[4708]: I0227 18:12:00.383424 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjgq4\" (UniqueName: \"kubernetes.io/projected/69db41fd-5c38-4d0a-8999-f8b595f26b06-kube-api-access-jjgq4\") pod \"auto-csr-approver-29536932-mq92q\" (UID: \"69db41fd-5c38-4d0a-8999-f8b595f26b06\") " pod="openshift-infra/auto-csr-approver-29536932-mq92q" Feb 27 18:12:00 crc kubenswrapper[4708]: I0227 18:12:00.493868 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536932-mq92q" Feb 27 18:12:00 crc kubenswrapper[4708]: W0227 18:12:00.944258 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod69db41fd_5c38_4d0a_8999_f8b595f26b06.slice/crio-47019f991a4534d2e2dcad9c848cd7960d8e5a3b603040930fa0d34d05f9aa4d WatchSource:0}: Error finding container 47019f991a4534d2e2dcad9c848cd7960d8e5a3b603040930fa0d34d05f9aa4d: Status 404 returned error can't find the container with id 47019f991a4534d2e2dcad9c848cd7960d8e5a3b603040930fa0d34d05f9aa4d Feb 27 18:12:00 crc kubenswrapper[4708]: I0227 18:12:00.951938 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536932-mq92q"] Feb 27 18:12:00 crc kubenswrapper[4708]: I0227 18:12:00.953411 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 18:12:01 crc kubenswrapper[4708]: E0227 18:12:01.816801 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:12:01 crc kubenswrapper[4708]: E0227 18:12:01.817022 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:12:01 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:12:01 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jjgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536932-mq92q_openshift-infra(69db41fd-5c38-4d0a-8999-f8b595f26b06): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:12:01 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:12:01 crc kubenswrapper[4708]: E0227 18:12:01.819046 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" 
pod="openshift-infra/auto-csr-approver-29536932-mq92q" podUID="69db41fd-5c38-4d0a-8999-f8b595f26b06" Feb 27 18:12:01 crc kubenswrapper[4708]: I0227 18:12:01.960723 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536932-mq92q" event={"ID":"69db41fd-5c38-4d0a-8999-f8b595f26b06","Type":"ContainerStarted","Data":"47019f991a4534d2e2dcad9c848cd7960d8e5a3b603040930fa0d34d05f9aa4d"} Feb 27 18:12:01 crc kubenswrapper[4708]: E0227 18:12:01.963408 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536932-mq92q" podUID="69db41fd-5c38-4d0a-8999-f8b595f26b06" Feb 27 18:12:02 crc kubenswrapper[4708]: E0227 18:12:02.988187 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536932-mq92q" podUID="69db41fd-5c38-4d0a-8999-f8b595f26b06" Feb 27 18:12:05 crc kubenswrapper[4708]: E0227 18:12:05.231997 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:12:10 crc kubenswrapper[4708]: E0227 18:12:10.232373 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:12:11 crc kubenswrapper[4708]: E0227 18:12:11.230120 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:12:12 crc kubenswrapper[4708]: I0227 18:12:12.241069 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:12:12 crc kubenswrapper[4708]: E0227 18:12:12.241907 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:12:14 crc kubenswrapper[4708]: E0227 18:12:14.096067 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:12:14 crc kubenswrapper[4708]: E0227 18:12:14.096268 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 
18:12:14 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:12:14 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-jjgq4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536932-mq92q_openshift-infra(69db41fd-5c38-4d0a-8999-f8b595f26b06): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:12:14 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:12:14 crc kubenswrapper[4708]: E0227 18:12:14.097481 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536932-mq92q" podUID="69db41fd-5c38-4d0a-8999-f8b595f26b06" Feb 27 18:12:19 crc kubenswrapper[4708]: E0227 18:12:19.230758 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:12:21 crc kubenswrapper[4708]: E0227 18:12:21.230478 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:12:22 crc kubenswrapper[4708]: E0227 18:12:22.244058 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:12:23 crc kubenswrapper[4708]: I0227 18:12:23.229948 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:12:23 crc kubenswrapper[4708]: E0227 18:12:23.230407 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:12:29 crc kubenswrapper[4708]: E0227 18:12:29.230835 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536932-mq92q" podUID="69db41fd-5c38-4d0a-8999-f8b595f26b06" Feb 27 18:12:33 crc kubenswrapper[4708]: E0227 18:12:33.232623 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:12:35 crc kubenswrapper[4708]: I0227 18:12:35.228580 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:12:35 crc kubenswrapper[4708]: E0227 18:12:35.229188 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:12:35 crc kubenswrapper[4708]: E0227 18:12:35.231901 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:12:35 crc kubenswrapper[4708]: E0227 18:12:35.232302 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:12:43 crc kubenswrapper[4708]: I0227 18:12:43.463238 4708 generic.go:334] "Generic (PLEG): container finished" podID="69db41fd-5c38-4d0a-8999-f8b595f26b06" containerID="73a73e92dc52984b37d2e83e1a23772a3224ac59bca23ec290e0fd574f9c5c98" exitCode=0 Feb 27 18:12:43 crc kubenswrapper[4708]: I0227 18:12:43.463378 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536932-mq92q" event={"ID":"69db41fd-5c38-4d0a-8999-f8b595f26b06","Type":"ContainerDied","Data":"73a73e92dc52984b37d2e83e1a23772a3224ac59bca23ec290e0fd574f9c5c98"} Feb 27 18:12:44 crc kubenswrapper[4708]: I0227 18:12:44.993827 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536932-mq92q" Feb 27 18:12:45 crc kubenswrapper[4708]: I0227 18:12:45.157876 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjgq4\" (UniqueName: \"kubernetes.io/projected/69db41fd-5c38-4d0a-8999-f8b595f26b06-kube-api-access-jjgq4\") pod \"69db41fd-5c38-4d0a-8999-f8b595f26b06\" (UID: \"69db41fd-5c38-4d0a-8999-f8b595f26b06\") " Feb 27 18:12:45 crc kubenswrapper[4708]: I0227 18:12:45.165732 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/69db41fd-5c38-4d0a-8999-f8b595f26b06-kube-api-access-jjgq4" (OuterVolumeSpecName: "kube-api-access-jjgq4") pod "69db41fd-5c38-4d0a-8999-f8b595f26b06" (UID: "69db41fd-5c38-4d0a-8999-f8b595f26b06"). InnerVolumeSpecName "kube-api-access-jjgq4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:12:45 crc kubenswrapper[4708]: I0227 18:12:45.262948 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjgq4\" (UniqueName: \"kubernetes.io/projected/69db41fd-5c38-4d0a-8999-f8b595f26b06-kube-api-access-jjgq4\") on node \"crc\" DevicePath \"\"" Feb 27 18:12:45 crc kubenswrapper[4708]: I0227 18:12:45.490491 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536932-mq92q" event={"ID":"69db41fd-5c38-4d0a-8999-f8b595f26b06","Type":"ContainerDied","Data":"47019f991a4534d2e2dcad9c848cd7960d8e5a3b603040930fa0d34d05f9aa4d"} Feb 27 18:12:45 crc kubenswrapper[4708]: I0227 18:12:45.490550 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47019f991a4534d2e2dcad9c848cd7960d8e5a3b603040930fa0d34d05f9aa4d" Feb 27 18:12:45 crc kubenswrapper[4708]: I0227 18:12:45.490670 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536932-mq92q" Feb 27 18:12:46 crc kubenswrapper[4708]: I0227 18:12:46.089355 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536920-qvr9v"] Feb 27 18:12:46 crc kubenswrapper[4708]: I0227 18:12:46.102464 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536920-qvr9v"] Feb 27 18:12:46 crc kubenswrapper[4708]: I0227 18:12:46.247723 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37134f16-3dd4-4d15-8848-5674ca11e392" path="/var/lib/kubelet/pods/37134f16-3dd4-4d15-8848-5674ca11e392/volumes" Feb 27 18:12:47 crc kubenswrapper[4708]: E0227 18:12:47.238519 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:12:48 crc kubenswrapper[4708]: E0227 18:12:48.230993 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:12:49 crc kubenswrapper[4708]: I0227 18:12:49.229005 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:12:49 crc kubenswrapper[4708]: E0227 18:12:49.230906 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:12:49 crc kubenswrapper[4708]: I0227 18:12:49.548899 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"22ded62c57513f4a94873c5f2f7942c7a83d9f03a56582087a9e2ac2ff8ceafc"} Feb 27 18:13:00 crc kubenswrapper[4708]: E0227 18:13:00.231352 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:13:00 crc kubenswrapper[4708]: E0227 18:13:00.293627 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:13:00 crc kubenswrapper[4708]: E0227 18:13:00.293792 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:13:00 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm 
certificate approve Feb 27 18:13:00 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6dm64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536930-d9sgn_openshift-infra(fb343271-5527-4655-973b-f3a35b328fce): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:13:00 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:13:00 crc kubenswrapper[4708]: E0227 18:13:00.295045 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:13:03 crc kubenswrapper[4708]: E0227 18:13:03.231659 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:13:03 crc kubenswrapper[4708]: I0227 18:13:03.973108 4708 scope.go:117] "RemoveContainer" containerID="6d4326b7f75356fb2aea4e833eda1c6f545ac34d2fe41355c0bcce38a03786cc" Feb 27 18:13:12 crc kubenswrapper[4708]: E0227 18:13:12.247338 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:13:13 crc kubenswrapper[4708]: E0227 18:13:13.232131 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:13:16 crc kubenswrapper[4708]: E0227 18:13:16.230375 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:13:26 crc kubenswrapper[4708]: E0227 18:13:26.232945 4708 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:13:27 crc kubenswrapper[4708]: E0227 18:13:27.230307 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:13:29 crc kubenswrapper[4708]: E0227 18:13:29.231899 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:13:38 crc kubenswrapper[4708]: E0227 18:13:38.232428 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:13:38 crc kubenswrapper[4708]: E0227 18:13:38.232598 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.205026 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w8gv2"] Feb 27 18:13:40 crc kubenswrapper[4708]: E0227 18:13:40.205827 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="69db41fd-5c38-4d0a-8999-f8b595f26b06" containerName="oc" Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.205884 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="69db41fd-5c38-4d0a-8999-f8b595f26b06" containerName="oc" Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.206178 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="69db41fd-5c38-4d0a-8999-f8b595f26b06" containerName="oc" Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.208013 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.265961 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w8gv2"] Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.274243 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d804e14-1e10-42fa-aa61-befd676f2556-utilities\") pod \"redhat-operators-w8gv2\" (UID: \"0d804e14-1e10-42fa-aa61-befd676f2556\") " pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.274301 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzc94\" (UniqueName: \"kubernetes.io/projected/0d804e14-1e10-42fa-aa61-befd676f2556-kube-api-access-fzc94\") pod \"redhat-operators-w8gv2\" (UID: \"0d804e14-1e10-42fa-aa61-befd676f2556\") " pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.274339 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d804e14-1e10-42fa-aa61-befd676f2556-catalog-content\") pod \"redhat-operators-w8gv2\" (UID: \"0d804e14-1e10-42fa-aa61-befd676f2556\") " pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.376247 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d804e14-1e10-42fa-aa61-befd676f2556-utilities\") pod \"redhat-operators-w8gv2\" (UID: \"0d804e14-1e10-42fa-aa61-befd676f2556\") " pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.376315 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fzc94\" (UniqueName: \"kubernetes.io/projected/0d804e14-1e10-42fa-aa61-befd676f2556-kube-api-access-fzc94\") pod \"redhat-operators-w8gv2\" (UID: \"0d804e14-1e10-42fa-aa61-befd676f2556\") " pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.376354 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d804e14-1e10-42fa-aa61-befd676f2556-catalog-content\") pod \"redhat-operators-w8gv2\" (UID: \"0d804e14-1e10-42fa-aa61-befd676f2556\") " pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.376715 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d804e14-1e10-42fa-aa61-befd676f2556-utilities\") pod \"redhat-operators-w8gv2\" (UID: \"0d804e14-1e10-42fa-aa61-befd676f2556\") " pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.376814 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d804e14-1e10-42fa-aa61-befd676f2556-catalog-content\") pod \"redhat-operators-w8gv2\" (UID: \"0d804e14-1e10-42fa-aa61-befd676f2556\") " pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.737549 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-fzc94\" (UniqueName: \"kubernetes.io/projected/0d804e14-1e10-42fa-aa61-befd676f2556-kube-api-access-fzc94\") pod \"redhat-operators-w8gv2\" (UID: \"0d804e14-1e10-42fa-aa61-befd676f2556\") " pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:13:40 crc kubenswrapper[4708]: I0227 18:13:40.852526 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:13:41 crc kubenswrapper[4708]: E0227 18:13:41.230428 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:13:41 crc kubenswrapper[4708]: W0227 18:13:41.330110 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d804e14_1e10_42fa_aa61_befd676f2556.slice/crio-77ffac012e5734737bdedd16604548dd9b9328dc63ce2e04dd57c2401646e8c6 WatchSource:0}: Error finding container 77ffac012e5734737bdedd16604548dd9b9328dc63ce2e04dd57c2401646e8c6: Status 404 returned error can't find the container with id 77ffac012e5734737bdedd16604548dd9b9328dc63ce2e04dd57c2401646e8c6 Feb 27 18:13:41 crc kubenswrapper[4708]: I0227 18:13:41.332335 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w8gv2"] Feb 27 18:13:42 crc kubenswrapper[4708]: I0227 18:13:42.197942 4708 generic.go:334] "Generic (PLEG): container finished" podID="0d804e14-1e10-42fa-aa61-befd676f2556" containerID="d61839d9e4f28e8cd73ef4b2875d52b193b7e1a13c35c88642d4e8ae7c3bee85" exitCode=0 Feb 27 18:13:42 crc kubenswrapper[4708]: I0227 18:13:42.198060 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w8gv2" event={"ID":"0d804e14-1e10-42fa-aa61-befd676f2556","Type":"ContainerDied","Data":"d61839d9e4f28e8cd73ef4b2875d52b193b7e1a13c35c88642d4e8ae7c3bee85"} Feb 27 18:13:42 crc kubenswrapper[4708]: I0227 18:13:42.198416 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w8gv2" event={"ID":"0d804e14-1e10-42fa-aa61-befd676f2556","Type":"ContainerStarted","Data":"77ffac012e5734737bdedd16604548dd9b9328dc63ce2e04dd57c2401646e8c6"} Feb 27 18:13:43 crc kubenswrapper[4708]: E0227 18:13:43.001971 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 18:13:43 crc kubenswrapper[4708]: E0227 18:13:43.002409 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzc94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-w8gv2_openshift-marketplace(0d804e14-1e10-42fa-aa61-befd676f2556): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:13:43 crc kubenswrapper[4708]: E0227 18:13:43.004125 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-w8gv2" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" Feb 27 18:13:43 crc kubenswrapper[4708]: E0227 18:13:43.213625 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-w8gv2" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" Feb 27 18:13:52 crc kubenswrapper[4708]: E0227 18:13:52.243131 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:13:53 crc kubenswrapper[4708]: E0227 18:13:53.231634 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:13:56 crc kubenswrapper[4708]: E0227 18:13:56.231053 4708 pod_workers.go:1301] "Error syncing pod, skipping" 
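redhat-operators-w8gv2 follows the usual marketplace catalog flow: a first container (d61839d9…, finished at 18:13:42, presumably the utilities extractor) stages the copy-content helper on the shared utilities volume, then the extract-content init container runs it from the index image to populate catalog-content. Its command and args are verbatim in the spec dump above:

    /utilities/copy-content \
      --catalog.from=/configs --catalog.to=/extracted-catalog/catalog \
      --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache

That pull is blocked by the same registry-side signature 500, this time for redhat/redhat-operator-index:v4.18 (signature-3), which points at the registry's sigstore endpoint rather than anything specific to one image.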
err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:13:57 crc kubenswrapper[4708]: E0227 18:13:57.918259 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 18:13:57 crc kubenswrapper[4708]: E0227 18:13:57.918758 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fzc94,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-w8gv2_openshift-marketplace(0d804e14-1e10-42fa-aa61-befd676f2556): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:13:57 crc kubenswrapper[4708]: E0227 18:13:57.920144 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-w8gv2" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" Feb 27 18:14:00 crc kubenswrapper[4708]: I0227 18:14:00.168395 4708 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-infra/auto-csr-approver-29536934-qjmvw"] Feb 27 18:14:00 crc kubenswrapper[4708]: I0227 18:14:00.171159 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" Feb 27 18:14:00 crc kubenswrapper[4708]: I0227 18:14:00.190783 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536934-qjmvw"] Feb 27 18:14:00 crc kubenswrapper[4708]: I0227 18:14:00.246498 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmkn7\" (UniqueName: \"kubernetes.io/projected/b35a5adf-48a7-4e39-9491-c45f9b71b9b7-kube-api-access-xmkn7\") pod \"auto-csr-approver-29536934-qjmvw\" (UID: \"b35a5adf-48a7-4e39-9491-c45f9b71b9b7\") " pod="openshift-infra/auto-csr-approver-29536934-qjmvw" Feb 27 18:14:00 crc kubenswrapper[4708]: I0227 18:14:00.350346 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmkn7\" (UniqueName: \"kubernetes.io/projected/b35a5adf-48a7-4e39-9491-c45f9b71b9b7-kube-api-access-xmkn7\") pod \"auto-csr-approver-29536934-qjmvw\" (UID: \"b35a5adf-48a7-4e39-9491-c45f9b71b9b7\") " pod="openshift-infra/auto-csr-approver-29536934-qjmvw" Feb 27 18:14:00 crc kubenswrapper[4708]: I0227 18:14:00.377151 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmkn7\" (UniqueName: \"kubernetes.io/projected/b35a5adf-48a7-4e39-9491-c45f9b71b9b7-kube-api-access-xmkn7\") pod \"auto-csr-approver-29536934-qjmvw\" (UID: \"b35a5adf-48a7-4e39-9491-c45f9b71b9b7\") " pod="openshift-infra/auto-csr-approver-29536934-qjmvw" Feb 27 18:14:00 crc kubenswrapper[4708]: I0227 18:14:00.495931 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" Feb 27 18:14:00 crc kubenswrapper[4708]: W0227 18:14:00.999660 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb35a5adf_48a7_4e39_9491_c45f9b71b9b7.slice/crio-e53b1701631ba7af6f67f7d13168fecb912ef607a26a7cebe118618059cda574 WatchSource:0}: Error finding container e53b1701631ba7af6f67f7d13168fecb912ef607a26a7cebe118618059cda574: Status 404 returned error can't find the container with id e53b1701631ba7af6f67f7d13168fecb912ef607a26a7cebe118618059cda574 Feb 27 18:14:01 crc kubenswrapper[4708]: I0227 18:14:01.017005 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536934-qjmvw"] Feb 27 18:14:01 crc kubenswrapper[4708]: I0227 18:14:01.424108 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" event={"ID":"b35a5adf-48a7-4e39-9491-c45f9b71b9b7","Type":"ContainerStarted","Data":"e53b1701631ba7af6f67f7d13168fecb912ef607a26a7cebe118618059cda574"} Feb 27 18:14:01 crc kubenswrapper[4708]: E0227 18:14:01.963820 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:14:01 crc kubenswrapper[4708]: E0227 18:14:01.964400 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:14:01 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:14:01 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmkn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536934-qjmvw_openshift-infra(b35a5adf-48a7-4e39-9491-c45f9b71b9b7): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:14:01 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:14:01 crc kubenswrapper[4708]: E0227 18:14:01.965637 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:14:02 crc kubenswrapper[4708]: E0227 18:14:02.437479 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:14:04 crc kubenswrapper[4708]: I0227 18:14:04.067469 4708 scope.go:117] "RemoveContainer" containerID="9dc28a406132f190f86ffc1e1fdff2988972b9b2332deba38690f2367cc0b334" Feb 27 18:14:05 crc kubenswrapper[4708]: E0227 18:14:05.231402 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:14:06 crc kubenswrapper[4708]: E0227 18:14:06.231940 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:14:09 crc kubenswrapper[4708]: E0227 18:14:09.083926 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:14:09 crc kubenswrapper[4708]: E0227 18:14:09.084688 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:14:09 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:14:09 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tb8pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536928-k2dpc_openshift-infra(7be693cf-322d-4ac9-b66c-35a281510ef4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:14:09 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:14:09 crc kubenswrapper[4708]: E0227 18:14:09.085930 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:14:09 crc kubenswrapper[4708]: E0227 18:14:09.231037 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-w8gv2" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" Feb 27 18:14:16 crc kubenswrapper[4708]: E0227 18:14:16.231872 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:14:18 crc kubenswrapper[4708]: E0227 18:14:18.417252 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:14:18 crc kubenswrapper[4708]: E0227 18:14:18.417799 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:14:18 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:14:18 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmkn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536934-qjmvw_openshift-infra(b35a5adf-48a7-4e39-9491-c45f9b71b9b7): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:14:18 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:14:18 crc kubenswrapper[4708]: E0227 18:14:18.419081 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:14:21 crc kubenswrapper[4708]: E0227 18:14:21.232949 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:14:21 crc kubenswrapper[4708]: E0227 18:14:21.232990 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:14:24 crc kubenswrapper[4708]: I0227 18:14:24.718726 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w8gv2" event={"ID":"0d804e14-1e10-42fa-aa61-befd676f2556","Type":"ContainerStarted","Data":"1002e329be95a3ea947544081b01b6a1c5908bc48c83d14a4805355e0581282b"} Feb 27 18:14:29 crc kubenswrapper[4708]: E0227 18:14:29.230505 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:14:29 crc kubenswrapper[4708]: E0227 18:14:29.230688 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:14:29 crc kubenswrapper[4708]: I0227 18:14:29.772437 4708 generic.go:334] "Generic (PLEG): container finished" podID="0d804e14-1e10-42fa-aa61-befd676f2556" containerID="1002e329be95a3ea947544081b01b6a1c5908bc48c83d14a4805355e0581282b" exitCode=0 Feb 27 18:14:29 crc kubenswrapper[4708]: I0227 18:14:29.772482 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w8gv2" event={"ID":"0d804e14-1e10-42fa-aa61-befd676f2556","Type":"ContainerDied","Data":"1002e329be95a3ea947544081b01b6a1c5908bc48c83d14a4805355e0581282b"} Feb 27 18:14:30 crc kubenswrapper[4708]: I0227 18:14:30.793629 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w8gv2" event={"ID":"0d804e14-1e10-42fa-aa61-befd676f2556","Type":"ContainerStarted","Data":"af92582d4a4f259d3b1387348f90e0b2feab4128812d53a393a80315d4bb9158"} Feb 27 18:14:30 crc kubenswrapper[4708]: I0227 
18:14:30.832270 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w8gv2" podStartSLOduration=2.799836516 podStartE2EDuration="50.832244561s" podCreationTimestamp="2026-02-27 18:13:40 +0000 UTC" firstStartedPulling="2026-02-27 18:13:42.201479087 +0000 UTC m=+4820.717276674" lastFinishedPulling="2026-02-27 18:14:30.233887092 +0000 UTC m=+4868.749684719" observedRunningTime="2026-02-27 18:14:30.817523697 +0000 UTC m=+4869.333321324" watchObservedRunningTime="2026-02-27 18:14:30.832244561 +0000 UTC m=+4869.348042178" Feb 27 18:14:30 crc kubenswrapper[4708]: I0227 18:14:30.853449 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:14:30 crc kubenswrapper[4708]: I0227 18:14:30.853501 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:14:31 crc kubenswrapper[4708]: I0227 18:14:31.913052 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w8gv2" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" containerName="registry-server" probeResult="failure" output=< Feb 27 18:14:31 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 18:14:31 crc kubenswrapper[4708]: > Feb 27 18:14:33 crc kubenswrapper[4708]: E0227 18:14:33.230825 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:14:34 crc kubenswrapper[4708]: E0227 18:14:34.232734 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:14:40 crc kubenswrapper[4708]: E0227 18:14:40.230873 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:14:41 crc kubenswrapper[4708]: E0227 18:14:41.062293 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:14:41 crc kubenswrapper[4708]: E0227 18:14:41.062797 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:14:41 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:14:41 crc kubenswrapper[4708]: 
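Once the index image finally pulls (ContainerStarted 1002e329… at 18:14:24, exit 0 at 18:14:29), registry-server starts and the pod is marked running 50.8 s after creation, almost all of it pull time (compare podStartE2EDuration with podStartSLOduration above). The startup probe then keeps failing at 10 s intervals (18:14:31/41/51); its output format matches grpc_health_probe, so a hedged manual equivalent run from inside the container would be:

    grpc_health_probe -addr=:50051

A registry-server still loading a freshly extracted catalog commonly refuses connections on :50051 for a while, so probe failures immediately after start are expected rather than fatal; the kubelet simply retries on the probe period.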
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmkn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536934-qjmvw_openshift-infra(b35a5adf-48a7-4e39-9491-c45f9b71b9b7): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:14:41 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:14:41 crc kubenswrapper[4708]: E0227 18:14:41.064094 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:14:41 crc kubenswrapper[4708]: I0227 18:14:41.938000 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w8gv2" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" containerName="registry-server" probeResult="failure" output=< Feb 27 18:14:41 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 18:14:41 crc kubenswrapper[4708]: > Feb 27 18:14:44 crc kubenswrapper[4708]: E0227 18:14:44.238037 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:14:49 crc kubenswrapper[4708]: E0227 18:14:49.232533 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:14:51 crc kubenswrapper[4708]: I0227 18:14:51.909403 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-w8gv2" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" containerName="registry-server" probeResult="failure" output=< Feb 27 18:14:51 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 18:14:51 crc kubenswrapper[4708]: > Feb 27 18:14:52 crc kubenswrapper[4708]: E0227 18:14:52.236665 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:14:53 crc kubenswrapper[4708]: E0227 18:14:53.230746 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:14:55 crc kubenswrapper[4708]: E0227 18:14:55.231770 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.162345 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8"] Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.165662 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.171357 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.175758 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8"] Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.180005 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.284105 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f784ada7-bb58-4319-afd9-fb504136a164-config-volume\") pod \"collect-profiles-29536935-88fh8\" (UID: \"f784ada7-bb58-4319-afd9-fb504136a164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.284342 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f784ada7-bb58-4319-afd9-fb504136a164-secret-volume\") pod \"collect-profiles-29536935-88fh8\" (UID: \"f784ada7-bb58-4319-afd9-fb504136a164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.284387 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcmd8\" (UniqueName: \"kubernetes.io/projected/f784ada7-bb58-4319-afd9-fb504136a164-kube-api-access-lcmd8\") pod \"collect-profiles-29536935-88fh8\" (UID: \"f784ada7-bb58-4319-afd9-fb504136a164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.386603 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f784ada7-bb58-4319-afd9-fb504136a164-secret-volume\") pod \"collect-profiles-29536935-88fh8\" (UID: 
\"f784ada7-bb58-4319-afd9-fb504136a164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.386720 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcmd8\" (UniqueName: \"kubernetes.io/projected/f784ada7-bb58-4319-afd9-fb504136a164-kube-api-access-lcmd8\") pod \"collect-profiles-29536935-88fh8\" (UID: \"f784ada7-bb58-4319-afd9-fb504136a164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.386906 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f784ada7-bb58-4319-afd9-fb504136a164-config-volume\") pod \"collect-profiles-29536935-88fh8\" (UID: \"f784ada7-bb58-4319-afd9-fb504136a164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.388501 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f784ada7-bb58-4319-afd9-fb504136a164-config-volume\") pod \"collect-profiles-29536935-88fh8\" (UID: \"f784ada7-bb58-4319-afd9-fb504136a164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.395143 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f784ada7-bb58-4319-afd9-fb504136a164-secret-volume\") pod \"collect-profiles-29536935-88fh8\" (UID: \"f784ada7-bb58-4319-afd9-fb504136a164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.405128 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcmd8\" (UniqueName: \"kubernetes.io/projected/f784ada7-bb58-4319-afd9-fb504136a164-kube-api-access-lcmd8\") pod \"collect-profiles-29536935-88fh8\" (UID: \"f784ada7-bb58-4319-afd9-fb504136a164\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.494790 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.907659 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:15:00 crc kubenswrapper[4708]: I0227 18:15:00.970975 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:15:01 crc kubenswrapper[4708]: I0227 18:15:01.012731 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8"] Feb 27 18:15:01 crc kubenswrapper[4708]: I0227 18:15:01.145135 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w8gv2"] Feb 27 18:15:01 crc kubenswrapper[4708]: I0227 18:15:01.209831 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" event={"ID":"f784ada7-bb58-4319-afd9-fb504136a164","Type":"ContainerStarted","Data":"dba1145866de3fa9a54ace61a6affcc25c3c17af261a2377a8e7b7ace5e3ec2c"} Feb 27 18:15:01 crc kubenswrapper[4708]: I0227 18:15:01.209906 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" event={"ID":"f784ada7-bb58-4319-afd9-fb504136a164","Type":"ContainerStarted","Data":"ab6a7ba63a4d08ce93fcda7112a214bc31552b5da8d8e8dfd4199f3ecf1a211e"} Feb 27 18:15:01 crc kubenswrapper[4708]: E0227 18:15:01.231896 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:15:01 crc kubenswrapper[4708]: I0227 18:15:01.231939 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" podStartSLOduration=1.231922484 podStartE2EDuration="1.231922484s" podCreationTimestamp="2026-02-27 18:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:15:01.230159194 +0000 UTC m=+4899.745956791" watchObservedRunningTime="2026-02-27 18:15:01.231922484 +0000 UTC m=+4899.747720081" Feb 27 18:15:02 crc kubenswrapper[4708]: I0227 18:15:02.224984 4708 generic.go:334] "Generic (PLEG): container finished" podID="f784ada7-bb58-4319-afd9-fb504136a164" containerID="dba1145866de3fa9a54ace61a6affcc25c3c17af261a2377a8e7b7ace5e3ec2c" exitCode=0 Feb 27 18:15:02 crc kubenswrapper[4708]: I0227 18:15:02.225040 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" event={"ID":"f784ada7-bb58-4319-afd9-fb504136a164","Type":"ContainerDied","Data":"dba1145866de3fa9a54ace61a6affcc25c3c17af261a2377a8e7b7ace5e3ec2c"} Feb 27 18:15:02 crc kubenswrapper[4708]: I0227 18:15:02.225834 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w8gv2" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" containerName="registry-server" containerID="cri-o://af92582d4a4f259d3b1387348f90e0b2feab4128812d53a393a80315d4bb9158" gracePeriod=2 Feb 27 18:15:02 crc kubenswrapper[4708]: E0227 18:15:02.958796 4708 
cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0d804e14_1e10_42fa_aa61_befd676f2556.slice/crio-conmon-af92582d4a4f259d3b1387348f90e0b2feab4128812d53a393a80315d4bb9158.scope\": RecentStats: unable to find data in memory cache]" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.113944 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.167344 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d804e14-1e10-42fa-aa61-befd676f2556-utilities\") pod \"0d804e14-1e10-42fa-aa61-befd676f2556\" (UID: \"0d804e14-1e10-42fa-aa61-befd676f2556\") " Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.167466 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d804e14-1e10-42fa-aa61-befd676f2556-catalog-content\") pod \"0d804e14-1e10-42fa-aa61-befd676f2556\" (UID: \"0d804e14-1e10-42fa-aa61-befd676f2556\") " Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.167587 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fzc94\" (UniqueName: \"kubernetes.io/projected/0d804e14-1e10-42fa-aa61-befd676f2556-kube-api-access-fzc94\") pod \"0d804e14-1e10-42fa-aa61-befd676f2556\" (UID: \"0d804e14-1e10-42fa-aa61-befd676f2556\") " Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.168257 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d804e14-1e10-42fa-aa61-befd676f2556-utilities" (OuterVolumeSpecName: "utilities") pod "0d804e14-1e10-42fa-aa61-befd676f2556" (UID: "0d804e14-1e10-42fa-aa61-befd676f2556"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.176113 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d804e14-1e10-42fa-aa61-befd676f2556-kube-api-access-fzc94" (OuterVolumeSpecName: "kube-api-access-fzc94") pod "0d804e14-1e10-42fa-aa61-befd676f2556" (UID: "0d804e14-1e10-42fa-aa61-befd676f2556"). InnerVolumeSpecName "kube-api-access-fzc94". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.252763 4708 generic.go:334] "Generic (PLEG): container finished" podID="0d804e14-1e10-42fa-aa61-befd676f2556" containerID="af92582d4a4f259d3b1387348f90e0b2feab4128812d53a393a80315d4bb9158" exitCode=0 Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.252871 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w8gv2" event={"ID":"0d804e14-1e10-42fa-aa61-befd676f2556","Type":"ContainerDied","Data":"af92582d4a4f259d3b1387348f90e0b2feab4128812d53a393a80315d4bb9158"} Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.252946 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w8gv2" event={"ID":"0d804e14-1e10-42fa-aa61-befd676f2556","Type":"ContainerDied","Data":"77ffac012e5734737bdedd16604548dd9b9328dc63ce2e04dd57c2401646e8c6"} Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.252977 4708 scope.go:117] "RemoveContainer" containerID="af92582d4a4f259d3b1387348f90e0b2feab4128812d53a393a80315d4bb9158" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.253238 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w8gv2" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.271183 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0d804e14-1e10-42fa-aa61-befd676f2556-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.271208 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fzc94\" (UniqueName: \"kubernetes.io/projected/0d804e14-1e10-42fa-aa61-befd676f2556-kube-api-access-fzc94\") on node \"crc\" DevicePath \"\"" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.281567 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d804e14-1e10-42fa-aa61-befd676f2556-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0d804e14-1e10-42fa-aa61-befd676f2556" (UID: "0d804e14-1e10-42fa-aa61-befd676f2556"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.281809 4708 scope.go:117] "RemoveContainer" containerID="1002e329be95a3ea947544081b01b6a1c5908bc48c83d14a4805355e0581282b" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.307340 4708 scope.go:117] "RemoveContainer" containerID="d61839d9e4f28e8cd73ef4b2875d52b193b7e1a13c35c88642d4e8ae7c3bee85" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.327516 4708 scope.go:117] "RemoveContainer" containerID="af92582d4a4f259d3b1387348f90e0b2feab4128812d53a393a80315d4bb9158" Feb 27 18:15:03 crc kubenswrapper[4708]: E0227 18:15:03.328005 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af92582d4a4f259d3b1387348f90e0b2feab4128812d53a393a80315d4bb9158\": container with ID starting with af92582d4a4f259d3b1387348f90e0b2feab4128812d53a393a80315d4bb9158 not found: ID does not exist" containerID="af92582d4a4f259d3b1387348f90e0b2feab4128812d53a393a80315d4bb9158" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.328057 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af92582d4a4f259d3b1387348f90e0b2feab4128812d53a393a80315d4bb9158"} err="failed to get container status \"af92582d4a4f259d3b1387348f90e0b2feab4128812d53a393a80315d4bb9158\": rpc error: code = NotFound desc = could not find container \"af92582d4a4f259d3b1387348f90e0b2feab4128812d53a393a80315d4bb9158\": container with ID starting with af92582d4a4f259d3b1387348f90e0b2feab4128812d53a393a80315d4bb9158 not found: ID does not exist" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.328099 4708 scope.go:117] "RemoveContainer" containerID="1002e329be95a3ea947544081b01b6a1c5908bc48c83d14a4805355e0581282b" Feb 27 18:15:03 crc kubenswrapper[4708]: E0227 18:15:03.328395 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1002e329be95a3ea947544081b01b6a1c5908bc48c83d14a4805355e0581282b\": container with ID starting with 1002e329be95a3ea947544081b01b6a1c5908bc48c83d14a4805355e0581282b not found: ID does not exist" containerID="1002e329be95a3ea947544081b01b6a1c5908bc48c83d14a4805355e0581282b" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.328435 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1002e329be95a3ea947544081b01b6a1c5908bc48c83d14a4805355e0581282b"} err="failed to get container status \"1002e329be95a3ea947544081b01b6a1c5908bc48c83d14a4805355e0581282b\": rpc error: code = NotFound desc = could not find container \"1002e329be95a3ea947544081b01b6a1c5908bc48c83d14a4805355e0581282b\": container with ID starting with 1002e329be95a3ea947544081b01b6a1c5908bc48c83d14a4805355e0581282b not found: ID does not exist" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.328461 4708 scope.go:117] "RemoveContainer" containerID="d61839d9e4f28e8cd73ef4b2875d52b193b7e1a13c35c88642d4e8ae7c3bee85" Feb 27 18:15:03 crc kubenswrapper[4708]: E0227 18:15:03.328736 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d61839d9e4f28e8cd73ef4b2875d52b193b7e1a13c35c88642d4e8ae7c3bee85\": container with ID starting with d61839d9e4f28e8cd73ef4b2875d52b193b7e1a13c35c88642d4e8ae7c3bee85 not found: ID does not exist" containerID="d61839d9e4f28e8cd73ef4b2875d52b193b7e1a13c35c88642d4e8ae7c3bee85" Feb 27 18:15:03 crc 
kubenswrapper[4708]: I0227 18:15:03.328775 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d61839d9e4f28e8cd73ef4b2875d52b193b7e1a13c35c88642d4e8ae7c3bee85"} err="failed to get container status \"d61839d9e4f28e8cd73ef4b2875d52b193b7e1a13c35c88642d4e8ae7c3bee85\": rpc error: code = NotFound desc = could not find container \"d61839d9e4f28e8cd73ef4b2875d52b193b7e1a13c35c88642d4e8ae7c3bee85\": container with ID starting with d61839d9e4f28e8cd73ef4b2875d52b193b7e1a13c35c88642d4e8ae7c3bee85 not found: ID does not exist" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.373740 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0d804e14-1e10-42fa-aa61-befd676f2556-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.528601 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.576203 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcmd8\" (UniqueName: \"kubernetes.io/projected/f784ada7-bb58-4319-afd9-fb504136a164-kube-api-access-lcmd8\") pod \"f784ada7-bb58-4319-afd9-fb504136a164\" (UID: \"f784ada7-bb58-4319-afd9-fb504136a164\") " Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.576390 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f784ada7-bb58-4319-afd9-fb504136a164-secret-volume\") pod \"f784ada7-bb58-4319-afd9-fb504136a164\" (UID: \"f784ada7-bb58-4319-afd9-fb504136a164\") " Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.576454 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f784ada7-bb58-4319-afd9-fb504136a164-config-volume\") pod \"f784ada7-bb58-4319-afd9-fb504136a164\" (UID: \"f784ada7-bb58-4319-afd9-fb504136a164\") " Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.576996 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f784ada7-bb58-4319-afd9-fb504136a164-config-volume" (OuterVolumeSpecName: "config-volume") pod "f784ada7-bb58-4319-afd9-fb504136a164" (UID: "f784ada7-bb58-4319-afd9-fb504136a164"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.579825 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f784ada7-bb58-4319-afd9-fb504136a164-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f784ada7-bb58-4319-afd9-fb504136a164" (UID: "f784ada7-bb58-4319-afd9-fb504136a164"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.581488 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f784ada7-bb58-4319-afd9-fb504136a164-kube-api-access-lcmd8" (OuterVolumeSpecName: "kube-api-access-lcmd8") pod "f784ada7-bb58-4319-afd9-fb504136a164" (UID: "f784ada7-bb58-4319-afd9-fb504136a164"). InnerVolumeSpecName "kube-api-access-lcmd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.665089 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w8gv2"] Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.674015 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w8gv2"] Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.678462 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcmd8\" (UniqueName: \"kubernetes.io/projected/f784ada7-bb58-4319-afd9-fb504136a164-kube-api-access-lcmd8\") on node \"crc\" DevicePath \"\"" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.678488 4708 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f784ada7-bb58-4319-afd9-fb504136a164-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 18:15:03 crc kubenswrapper[4708]: I0227 18:15:03.678498 4708 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f784ada7-bb58-4319-afd9-fb504136a164-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 18:15:04 crc kubenswrapper[4708]: I0227 18:15:04.136264 4708 scope.go:117] "RemoveContainer" containerID="9e0095869b6366d1aea1cbdcca12b231631d4df7e8909b7b9246b55a1b4456c5" Feb 27 18:15:04 crc kubenswrapper[4708]: I0227 18:15:04.156209 4708 scope.go:117] "RemoveContainer" containerID="fca67efbf9aefb05764f6a516f364282dcfef6c21769e8645b9a57f1b476e7ee" Feb 27 18:15:04 crc kubenswrapper[4708]: I0227 18:15:04.247364 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" path="/var/lib/kubelet/pods/0d804e14-1e10-42fa-aa61-befd676f2556/volumes" Feb 27 18:15:04 crc kubenswrapper[4708]: I0227 18:15:04.269724 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" event={"ID":"f784ada7-bb58-4319-afd9-fb504136a164","Type":"ContainerDied","Data":"ab6a7ba63a4d08ce93fcda7112a214bc31552b5da8d8e8dfd4199f3ecf1a211e"} Feb 27 18:15:04 crc kubenswrapper[4708]: I0227 18:15:04.269795 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab6a7ba63a4d08ce93fcda7112a214bc31552b5da8d8e8dfd4199f3ecf1a211e" Feb 27 18:15:04 crc kubenswrapper[4708]: I0227 18:15:04.269946 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8" Feb 27 18:15:04 crc kubenswrapper[4708]: I0227 18:15:04.326016 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"] Feb 27 18:15:04 crc kubenswrapper[4708]: I0227 18:15:04.339150 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536890-knvxn"] Feb 27 18:15:04 crc kubenswrapper[4708]: E0227 18:15:04.636710 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:15:05 crc kubenswrapper[4708]: I0227 18:15:05.631961 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:15:05 crc kubenswrapper[4708]: I0227 18:15:05.632579 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:15:06 crc kubenswrapper[4708]: E0227 18:15:06.230705 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:15:06 crc kubenswrapper[4708]: I0227 18:15:06.253612 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="085aa630-d7eb-49b7-8f73-7291681011e7" path="/var/lib/kubelet/pods/085aa630-d7eb-49b7-8f73-7291681011e7/volumes" Feb 27 18:15:07 crc kubenswrapper[4708]: E0227 18:15:07.229461 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:15:16 crc kubenswrapper[4708]: E0227 18:15:16.231381 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:15:17 crc kubenswrapper[4708]: E0227 18:15:17.232168 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:15:20 crc kubenswrapper[4708]: E0227 18:15:20.233127 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:15:20 crc kubenswrapper[4708]: E0227 18:15:20.233207 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:15:27 crc kubenswrapper[4708]: E0227 18:15:27.230354 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:15:30 crc kubenswrapper[4708]: E0227 18:15:30.229842 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:15:32 crc kubenswrapper[4708]: E0227 18:15:32.490947 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:15:32 crc kubenswrapper[4708]: E0227 18:15:32.491350 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:15:32 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:15:32 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmkn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536934-qjmvw_openshift-infra(b35a5adf-48a7-4e39-9491-c45f9b71b9b7): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:15:32 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:15:32 crc kubenswrapper[4708]: E0227 18:15:32.492526 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: 
\"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:15:34 crc kubenswrapper[4708]: E0227 18:15:34.230496 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:15:35 crc kubenswrapper[4708]: I0227 18:15:35.631746 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:15:35 crc kubenswrapper[4708]: I0227 18:15:35.632660 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:15:43 crc kubenswrapper[4708]: E0227 18:15:43.230776 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:15:45 crc kubenswrapper[4708]: E0227 18:15:45.230547 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:15:48 crc kubenswrapper[4708]: E0227 18:15:48.230264 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:15:56 crc kubenswrapper[4708]: E0227 18:15:56.232171 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:15:57 crc kubenswrapper[4708]: E0227 18:15:57.230040 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:16:00 crc kubenswrapper[4708]: I0227 18:16:00.163665 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536936-jdswd"] Feb 27 18:16:00 crc kubenswrapper[4708]: E0227 
18:16:00.164736 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" containerName="extract-utilities" Feb 27 18:16:00 crc kubenswrapper[4708]: I0227 18:16:00.165061 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" containerName="extract-utilities" Feb 27 18:16:00 crc kubenswrapper[4708]: E0227 18:16:00.165099 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" containerName="extract-content" Feb 27 18:16:00 crc kubenswrapper[4708]: I0227 18:16:00.165111 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" containerName="extract-content" Feb 27 18:16:00 crc kubenswrapper[4708]: E0227 18:16:00.165141 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" containerName="registry-server" Feb 27 18:16:00 crc kubenswrapper[4708]: I0227 18:16:00.165154 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" containerName="registry-server" Feb 27 18:16:00 crc kubenswrapper[4708]: E0227 18:16:00.165184 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f784ada7-bb58-4319-afd9-fb504136a164" containerName="collect-profiles" Feb 27 18:16:00 crc kubenswrapper[4708]: I0227 18:16:00.165196 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f784ada7-bb58-4319-afd9-fb504136a164" containerName="collect-profiles" Feb 27 18:16:00 crc kubenswrapper[4708]: I0227 18:16:00.165572 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f784ada7-bb58-4319-afd9-fb504136a164" containerName="collect-profiles" Feb 27 18:16:00 crc kubenswrapper[4708]: I0227 18:16:00.165614 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="0d804e14-1e10-42fa-aa61-befd676f2556" containerName="registry-server" Feb 27 18:16:00 crc kubenswrapper[4708]: I0227 18:16:00.166840 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536936-jdswd" Feb 27 18:16:00 crc kubenswrapper[4708]: I0227 18:16:00.175787 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536936-jdswd"] Feb 27 18:16:00 crc kubenswrapper[4708]: I0227 18:16:00.313869 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5cnh\" (UniqueName: \"kubernetes.io/projected/efc70beb-3139-4d44-b928-698fe1e86ac6-kube-api-access-f5cnh\") pod \"auto-csr-approver-29536936-jdswd\" (UID: \"efc70beb-3139-4d44-b928-698fe1e86ac6\") " pod="openshift-infra/auto-csr-approver-29536936-jdswd" Feb 27 18:16:00 crc kubenswrapper[4708]: I0227 18:16:00.418564 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f5cnh\" (UniqueName: \"kubernetes.io/projected/efc70beb-3139-4d44-b928-698fe1e86ac6-kube-api-access-f5cnh\") pod \"auto-csr-approver-29536936-jdswd\" (UID: \"efc70beb-3139-4d44-b928-698fe1e86ac6\") " pod="openshift-infra/auto-csr-approver-29536936-jdswd" Feb 27 18:16:00 crc kubenswrapper[4708]: I0227 18:16:00.439391 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f5cnh\" (UniqueName: \"kubernetes.io/projected/efc70beb-3139-4d44-b928-698fe1e86ac6-kube-api-access-f5cnh\") pod \"auto-csr-approver-29536936-jdswd\" (UID: \"efc70beb-3139-4d44-b928-698fe1e86ac6\") " pod="openshift-infra/auto-csr-approver-29536936-jdswd" Feb 27 18:16:00 crc kubenswrapper[4708]: I0227 18:16:00.498982 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536936-jdswd" Feb 27 18:16:01 crc kubenswrapper[4708]: I0227 18:16:01.021118 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536936-jdswd"] Feb 27 18:16:01 crc kubenswrapper[4708]: I0227 18:16:01.936741 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536936-jdswd" event={"ID":"efc70beb-3139-4d44-b928-698fe1e86ac6","Type":"ContainerStarted","Data":"497e2a089171761140656fe0813dec70eb96eaf57238707b296a89000809e780"} Feb 27 18:16:02 crc kubenswrapper[4708]: E0227 18:16:02.244187 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:16:02 crc kubenswrapper[4708]: E0227 18:16:02.301764 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:16:02 crc kubenswrapper[4708]: E0227 18:16:02.301991 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:16:02 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:16:02 crc kubenswrapper[4708]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-f5cnh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536936-jdswd_openshift-infra(efc70beb-3139-4d44-b928-698fe1e86ac6): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:16:02 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:16:02 crc kubenswrapper[4708]: E0227 18:16:02.303548 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536936-jdswd" podUID="efc70beb-3139-4d44-b928-698fe1e86ac6" Feb 27 18:16:02 crc kubenswrapper[4708]: E0227 18:16:02.945115 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536936-jdswd" podUID="efc70beb-3139-4d44-b928-698fe1e86ac6" Feb 27 18:16:04 crc kubenswrapper[4708]: I0227 18:16:04.723459 4708 scope.go:117] "RemoveContainer" containerID="4c5d81f09c0a26ade0b95567c3ee3477e1cd8af2276ee8e3322621b3a20b01f4" Feb 27 18:16:05 crc kubenswrapper[4708]: I0227 18:16:05.632483 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:16:05 crc kubenswrapper[4708]: I0227 18:16:05.632899 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:16:05 crc kubenswrapper[4708]: I0227 18:16:05.632961 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 18:16:05 crc kubenswrapper[4708]: I0227 18:16:05.634205 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"22ded62c57513f4a94873c5f2f7942c7a83d9f03a56582087a9e2ac2ff8ceafc"} 
pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:16:05 crc kubenswrapper[4708]: I0227 18:16:05.634315 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://22ded62c57513f4a94873c5f2f7942c7a83d9f03a56582087a9e2ac2ff8ceafc" gracePeriod=600 Feb 27 18:16:06 crc kubenswrapper[4708]: I0227 18:16:06.986126 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="22ded62c57513f4a94873c5f2f7942c7a83d9f03a56582087a9e2ac2ff8ceafc" exitCode=0 Feb 27 18:16:06 crc kubenswrapper[4708]: I0227 18:16:06.986190 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"22ded62c57513f4a94873c5f2f7942c7a83d9f03a56582087a9e2ac2ff8ceafc"} Feb 27 18:16:06 crc kubenswrapper[4708]: I0227 18:16:06.986828 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785"} Feb 27 18:16:06 crc kubenswrapper[4708]: I0227 18:16:06.986901 4708 scope.go:117] "RemoveContainer" containerID="49128ac43fa2cf75ff5d6eac0315a214dc19c179f571188a6f735713f72a7e05" Feb 27 18:16:08 crc kubenswrapper[4708]: E0227 18:16:08.231432 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:16:11 crc kubenswrapper[4708]: E0227 18:16:11.237093 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:16:15 crc kubenswrapper[4708]: E0227 18:16:15.325308 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:16:15 crc kubenswrapper[4708]: E0227 18:16:15.326128 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:16:15 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:16:15 crc kubenswrapper[4708]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6dm64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536930-d9sgn_openshift-infra(fb343271-5527-4655-973b-f3a35b328fce): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:16:15 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:16:15 crc kubenswrapper[4708]: E0227 18:16:15.327323 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:16:16 crc kubenswrapper[4708]: E0227 18:16:16.230226 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:16:17 crc kubenswrapper[4708]: I0227 18:16:17.128162 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536936-jdswd" event={"ID":"efc70beb-3139-4d44-b928-698fe1e86ac6","Type":"ContainerStarted","Data":"79bbc71460a345d972153185428b7652ce261317efb27392cad66ddb09149863"} Feb 27 18:16:17 crc kubenswrapper[4708]: I0227 18:16:17.157715 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536936-jdswd" podStartSLOduration=1.547014214 podStartE2EDuration="17.157684883s" podCreationTimestamp="2026-02-27 18:16:00 +0000 UTC" firstStartedPulling="2026-02-27 18:16:01.026265436 +0000 UTC m=+4959.542063033" lastFinishedPulling="2026-02-27 18:16:16.636936085 +0000 UTC m=+4975.152733702" observedRunningTime="2026-02-27 18:16:17.149187104 +0000 UTC m=+4975.664984731" watchObservedRunningTime="2026-02-27 18:16:17.157684883 +0000 UTC m=+4975.673482480" Feb 27 18:16:18 crc kubenswrapper[4708]: I0227 18:16:18.141761 4708 generic.go:334] "Generic (PLEG): container finished" podID="efc70beb-3139-4d44-b928-698fe1e86ac6" containerID="79bbc71460a345d972153185428b7652ce261317efb27392cad66ddb09149863" exitCode=0 Feb 27 18:16:18 crc kubenswrapper[4708]: I0227 18:16:18.141905 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536936-jdswd" 
event={"ID":"efc70beb-3139-4d44-b928-698fe1e86ac6","Type":"ContainerDied","Data":"79bbc71460a345d972153185428b7652ce261317efb27392cad66ddb09149863"} Feb 27 18:16:19 crc kubenswrapper[4708]: E0227 18:16:19.231906 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:16:20 crc kubenswrapper[4708]: I0227 18:16:20.034195 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536936-jdswd" Feb 27 18:16:20 crc kubenswrapper[4708]: I0227 18:16:20.163217 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536936-jdswd" event={"ID":"efc70beb-3139-4d44-b928-698fe1e86ac6","Type":"ContainerDied","Data":"497e2a089171761140656fe0813dec70eb96eaf57238707b296a89000809e780"} Feb 27 18:16:20 crc kubenswrapper[4708]: I0227 18:16:20.163255 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="497e2a089171761140656fe0813dec70eb96eaf57238707b296a89000809e780" Feb 27 18:16:20 crc kubenswrapper[4708]: I0227 18:16:20.163268 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536936-jdswd" Feb 27 18:16:20 crc kubenswrapper[4708]: I0227 18:16:20.170569 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f5cnh\" (UniqueName: \"kubernetes.io/projected/efc70beb-3139-4d44-b928-698fe1e86ac6-kube-api-access-f5cnh\") pod \"efc70beb-3139-4d44-b928-698fe1e86ac6\" (UID: \"efc70beb-3139-4d44-b928-698fe1e86ac6\") " Feb 27 18:16:20 crc kubenswrapper[4708]: I0227 18:16:20.225197 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536922-7znds"] Feb 27 18:16:20 crc kubenswrapper[4708]: I0227 18:16:20.226279 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efc70beb-3139-4d44-b928-698fe1e86ac6-kube-api-access-f5cnh" (OuterVolumeSpecName: "kube-api-access-f5cnh") pod "efc70beb-3139-4d44-b928-698fe1e86ac6" (UID: "efc70beb-3139-4d44-b928-698fe1e86ac6"). InnerVolumeSpecName "kube-api-access-f5cnh". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 18:16:20 crc kubenswrapper[4708]: I0227 18:16:20.238088 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536922-7znds"]
Feb 27 18:16:20 crc kubenswrapper[4708]: I0227 18:16:20.272818 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f5cnh\" (UniqueName: \"kubernetes.io/projected/efc70beb-3139-4d44-b928-698fe1e86ac6-kube-api-access-f5cnh\") on node \"crc\" DevicePath \"\""
Feb 27 18:16:22 crc kubenswrapper[4708]: I0227 18:16:22.248441 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1070acf-438a-4619-9a43-e06fbad54ada" path="/var/lib/kubelet/pods/e1070acf-438a-4619-9a43-e06fbad54ada/volumes"
Feb 27 18:16:24 crc kubenswrapper[4708]: E0227 18:16:24.230440 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7"
Feb 27 18:16:28 crc kubenswrapper[4708]: E0227 18:16:28.232234 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce"
Feb 27 18:16:29 crc kubenswrapper[4708]: E0227 18:16:29.231036 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:16:33 crc kubenswrapper[4708]: E0227 18:16:33.231226 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d"
Feb 27 18:16:38 crc kubenswrapper[4708]: E0227 18:16:38.232835 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7"
Feb 27 18:16:39 crc kubenswrapper[4708]: E0227 18:16:39.230371 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce"
Feb 27 18:16:41 crc kubenswrapper[4708]: E0227 18:16:41.229620 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:16:48 crc kubenswrapper[4708]: E0227 18:16:48.025892 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest"
Feb 27 18:16:48 crc kubenswrapper[4708]: E0227 18:16:48.026421 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 18:16:48 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Feb 27 18:16:48 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l59vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536926-27vc5_openshift-infra(4169fe13-35f1-4450-b318-9b29670cdf2d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)
Feb 27 18:16:48 crc kubenswrapper[4708]: > logger="UnhandledError"
Feb 27 18:16:48 crc kubenswrapper[4708]: E0227 18:16:48.027593 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d"
Feb 27 18:16:52 crc kubenswrapper[4708]: E0227 18:16:52.244989 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7"
Feb 27 18:16:53 crc kubenswrapper[4708]: E0227 18:16:53.231010 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce"
Feb 27 18:16:55 crc kubenswrapper[4708]: E0227 18:16:55.230933 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:16:58 crc kubenswrapper[4708]: E0227 18:16:58.232261 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d"
Feb 27 18:17:04 crc kubenswrapper[4708]: I0227 18:17:04.789043 4708 scope.go:117] "RemoveContainer" containerID="44e39c4066c7ef199f64c3cc8d080c7d472bc3ad7a498dd54f0e2832054c7b86"
Feb 27 18:17:06 crc kubenswrapper[4708]: E0227 18:17:06.231354 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:17:07 crc kubenswrapper[4708]: I0227 18:17:07.232153 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 27 18:17:08 crc kubenswrapper[4708]: E0227 18:17:08.231734 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce"
Feb 27 18:17:08 crc kubenswrapper[4708]: E0227 18:17:08.462399 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest"
Feb 27 18:17:08 crc kubenswrapper[4708]: E0227 18:17:08.462725 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 18:17:08 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Feb 27 18:17:08 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmkn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536934-qjmvw_openshift-infra(b35a5adf-48a7-4e39-9491-c45f9b71b9b7): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)
Feb 27 18:17:08 crc kubenswrapper[4708]: > logger="UnhandledError"
Feb 27 18:17:08 crc kubenswrapper[4708]: E0227 18:17:08.463949 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7"
Feb 27 18:17:09 crc kubenswrapper[4708]: E0227 18:17:09.231688 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d"
Feb 27 18:17:17 crc kubenswrapper[4708]: E0227 18:17:17.232353 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:17:19 crc kubenswrapper[4708]: E0227 18:17:19.230478 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce"
Feb 27 18:17:20 crc kubenswrapper[4708]: E0227 18:17:20.230750 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7"
Feb 27 18:17:24 crc kubenswrapper[4708]: E0227 18:17:24.236596 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d"
Feb 27 18:17:28 crc kubenswrapper[4708]: E0227 18:17:28.231715 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:17:31 crc kubenswrapper[4708]: E0227 18:17:31.231099 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7"
Feb 27 18:17:31 crc kubenswrapper[4708]: E0227 18:17:31.231799 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce"
Feb 27 18:17:35 crc kubenswrapper[4708]: E0227 18:17:35.233245 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d"
Feb 27 18:17:43 crc kubenswrapper[4708]: E0227 18:17:43.230823 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce"
Feb 27 18:17:43 crc kubenswrapper[4708]: E0227 18:17:43.231005 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:17:46 crc kubenswrapper[4708]: E0227 18:17:46.232650 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7"
Feb 27 18:17:48 crc kubenswrapper[4708]: E0227 18:17:48.230933 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d"
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.648292 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-g7g7q"]
Feb 27 18:17:52 crc kubenswrapper[4708]: E0227 18:17:52.649630 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efc70beb-3139-4d44-b928-698fe1e86ac6" containerName="oc"
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.649654 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="efc70beb-3139-4d44-b928-698fe1e86ac6" containerName="oc"
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.650052 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="efc70beb-3139-4d44-b928-698fe1e86ac6" containerName="oc"
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.652635 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g7g7q"
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.672311 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g7g7q"]
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.783989 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-catalog-content\") pod \"community-operators-g7g7q\" (UID: \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\") " pod="openshift-marketplace/community-operators-g7g7q"
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.784322 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-utilities\") pod \"community-operators-g7g7q\" (UID: \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\") " pod="openshift-marketplace/community-operators-g7g7q"
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.784540 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2jnt\" (UniqueName: \"kubernetes.io/projected/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-kube-api-access-n2jnt\") pod \"community-operators-g7g7q\" (UID: \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\") " pod="openshift-marketplace/community-operators-g7g7q"
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.887245 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2jnt\" (UniqueName: \"kubernetes.io/projected/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-kube-api-access-n2jnt\") pod \"community-operators-g7g7q\" (UID: \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\") " pod="openshift-marketplace/community-operators-g7g7q"
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.887688 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-catalog-content\") pod \"community-operators-g7g7q\" (UID: \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\") " pod="openshift-marketplace/community-operators-g7g7q"
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.887829 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-utilities\") pod \"community-operators-g7g7q\" (UID: \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\") " pod="openshift-marketplace/community-operators-g7g7q"
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.888187 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-catalog-content\") pod \"community-operators-g7g7q\" (UID: \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\") " pod="openshift-marketplace/community-operators-g7g7q"
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.888371 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-utilities\") pod \"community-operators-g7g7q\" (UID: \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\") " pod="openshift-marketplace/community-operators-g7g7q"
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.911068 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2jnt\" (UniqueName: \"kubernetes.io/projected/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-kube-api-access-n2jnt\") pod \"community-operators-g7g7q\" (UID: \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\") " pod="openshift-marketplace/community-operators-g7g7q"
Feb 27 18:17:52 crc kubenswrapper[4708]: I0227 18:17:52.992670 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g7g7q"
Feb 27 18:17:53 crc kubenswrapper[4708]: W0227 18:17:53.484484 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11d1fa9e_19f0_40a1_9c60_34f11efc63f4.slice/crio-d661a2c3b15077263790ea46c9976310f0c7d8e9a24f94010612f56a79e79f97 WatchSource:0}: Error finding container d661a2c3b15077263790ea46c9976310f0c7d8e9a24f94010612f56a79e79f97: Status 404 returned error can't find the container with id d661a2c3b15077263790ea46c9976310f0c7d8e9a24f94010612f56a79e79f97
Feb 27 18:17:53 crc kubenswrapper[4708]: I0227 18:17:53.488209 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-g7g7q"]
Feb 27 18:17:54 crc kubenswrapper[4708]: I0227 18:17:54.317821 4708 generic.go:334] "Generic (PLEG): container finished" podID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4" containerID="6e59d0214df43094b84878e33fa3613e76a43139503d9eade6beca2ccf508120" exitCode=0
Feb 27 18:17:54 crc kubenswrapper[4708]: I0227 18:17:54.317911 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7g7q" event={"ID":"11d1fa9e-19f0-40a1-9c60-34f11efc63f4","Type":"ContainerDied","Data":"6e59d0214df43094b84878e33fa3613e76a43139503d9eade6beca2ccf508120"}
Feb 27 18:17:54 crc kubenswrapper[4708]: I0227 18:17:54.318116 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7g7q" event={"ID":"11d1fa9e-19f0-40a1-9c60-34f11efc63f4","Type":"ContainerStarted","Data":"d661a2c3b15077263790ea46c9976310f0c7d8e9a24f94010612f56a79e79f97"}
Feb 27 18:17:54 crc kubenswrapper[4708]: E0227 18:17:54.929206 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 27 18:17:54 crc kubenswrapper[4708]: E0227 18:17:54.929771 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2jnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-g7g7q_openshift-marketplace(11d1fa9e-19f0-40a1-9c60-34f11efc63f4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 18:17:54 crc kubenswrapper[4708]: E0227 18:17:54.931066 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-g7g7q" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4"
Feb 27 18:17:55 crc kubenswrapper[4708]: E0227 18:17:55.230815 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce"
Feb 27 18:17:55 crc kubenswrapper[4708]: E0227 18:17:55.337004 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-g7g7q" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4"
Feb 27 18:17:57 crc kubenswrapper[4708]: E0227 18:17:57.229958 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:17:58 crc kubenswrapper[4708]: E0227 18:17:58.230793 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7"
Feb 27 18:18:00 crc kubenswrapper[4708]: I0227 18:18:00.155917 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536938-rpzzd"]
Feb 27 18:18:00 crc kubenswrapper[4708]: I0227 18:18:00.158837 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536938-rpzzd"
Feb 27 18:18:00 crc kubenswrapper[4708]: I0227 18:18:00.167734 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536938-rpzzd"]
Feb 27 18:18:00 crc kubenswrapper[4708]: I0227 18:18:00.260047 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz7r7\" (UniqueName: \"kubernetes.io/projected/c13e5f6d-7286-4ea5-bad3-84d30d472475-kube-api-access-fz7r7\") pod \"auto-csr-approver-29536938-rpzzd\" (UID: \"c13e5f6d-7286-4ea5-bad3-84d30d472475\") " pod="openshift-infra/auto-csr-approver-29536938-rpzzd"
Feb 27 18:18:00 crc kubenswrapper[4708]: I0227 18:18:00.362156 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz7r7\" (UniqueName: \"kubernetes.io/projected/c13e5f6d-7286-4ea5-bad3-84d30d472475-kube-api-access-fz7r7\") pod \"auto-csr-approver-29536938-rpzzd\" (UID: \"c13e5f6d-7286-4ea5-bad3-84d30d472475\") " pod="openshift-infra/auto-csr-approver-29536938-rpzzd"
Feb 27 18:18:00 crc kubenswrapper[4708]: I0227 18:18:00.385396 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz7r7\" (UniqueName: \"kubernetes.io/projected/c13e5f6d-7286-4ea5-bad3-84d30d472475-kube-api-access-fz7r7\") pod \"auto-csr-approver-29536938-rpzzd\" (UID: \"c13e5f6d-7286-4ea5-bad3-84d30d472475\") " pod="openshift-infra/auto-csr-approver-29536938-rpzzd"
Feb 27 18:18:00 crc kubenswrapper[4708]: I0227 18:18:00.482326 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536938-rpzzd"
Feb 27 18:18:00 crc kubenswrapper[4708]: I0227 18:18:00.973637 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536938-rpzzd"]
Feb 27 18:18:00 crc kubenswrapper[4708]: W0227 18:18:00.976573 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc13e5f6d_7286_4ea5_bad3_84d30d472475.slice/crio-1bfd0f43e954e91ef3e8e32c6adb3a7d31ec028da7e7e95580e8fd8fd9e11270 WatchSource:0}: Error finding container 1bfd0f43e954e91ef3e8e32c6adb3a7d31ec028da7e7e95580e8fd8fd9e11270: Status 404 returned error can't find the container with id 1bfd0f43e954e91ef3e8e32c6adb3a7d31ec028da7e7e95580e8fd8fd9e11270
Feb 27 18:18:01 crc kubenswrapper[4708]: E0227 18:18:01.230081 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d"
Feb 27 18:18:01 crc kubenswrapper[4708]: I0227 18:18:01.403449 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" event={"ID":"c13e5f6d-7286-4ea5-bad3-84d30d472475","Type":"ContainerStarted","Data":"1bfd0f43e954e91ef3e8e32c6adb3a7d31ec028da7e7e95580e8fd8fd9e11270"}
Feb 27 18:18:02 crc kubenswrapper[4708]: E0227 18:18:02.009811 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest"
Feb 27 18:18:02 crc kubenswrapper[4708]: E0227 18:18:02.010074 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 18:18:02 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Feb 27 18:18:02 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fz7r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536938-rpzzd_openshift-infra(c13e5f6d-7286-4ea5-bad3-84d30d472475): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)
Feb 27 18:18:02 crc kubenswrapper[4708]: > logger="UnhandledError"
Feb 27 18:18:02 crc kubenswrapper[4708]: E0227 18:18:02.011305 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" podUID="c13e5f6d-7286-4ea5-bad3-84d30d472475"
Feb 27 18:18:02 crc kubenswrapper[4708]: E0227 18:18:02.417363 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" podUID="c13e5f6d-7286-4ea5-bad3-84d30d472475"
Feb 27 18:18:06 crc kubenswrapper[4708]: E0227 18:18:06.233936 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce"
Feb 27 18:18:06 crc kubenswrapper[4708]: E0227 18:18:06.793985 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 27 18:18:06 crc kubenswrapper[4708]: E0227 18:18:06.794619 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2jnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-g7g7q_openshift-marketplace(11d1fa9e-19f0-40a1-9c60-34f11efc63f4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 18:18:06 crc kubenswrapper[4708]: E0227 18:18:06.796657 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-g7g7q" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4"
Feb 27 18:18:10 crc kubenswrapper[4708]: E0227 18:18:10.230733 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:18:13 crc kubenswrapper[4708]: E0227 18:18:13.231666 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7"
Feb 27 18:18:15 crc kubenswrapper[4708]: I0227 18:18:15.591002 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v5746"]
Feb 27 18:18:15 crc kubenswrapper[4708]: I0227 18:18:15.593391 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v5746"
Feb 27 18:18:15 crc kubenswrapper[4708]: I0227 18:18:15.612907 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5746"]
Feb 27 18:18:15 crc kubenswrapper[4708]: I0227 18:18:15.641733 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabec868-3d01-4bf9-b042-15f99cb49544-utilities\") pod \"redhat-marketplace-v5746\" (UID: \"fabec868-3d01-4bf9-b042-15f99cb49544\") " pod="openshift-marketplace/redhat-marketplace-v5746"
Feb 27 18:18:15 crc kubenswrapper[4708]: I0227 18:18:15.642046 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dm9g\" (UniqueName: \"kubernetes.io/projected/fabec868-3d01-4bf9-b042-15f99cb49544-kube-api-access-6dm9g\") pod \"redhat-marketplace-v5746\" (UID: \"fabec868-3d01-4bf9-b042-15f99cb49544\") " pod="openshift-marketplace/redhat-marketplace-v5746"
Feb 27 18:18:15 crc kubenswrapper[4708]: I0227 18:18:15.642079 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabec868-3d01-4bf9-b042-15f99cb49544-catalog-content\") pod \"redhat-marketplace-v5746\" (UID: \"fabec868-3d01-4bf9-b042-15f99cb49544\") " pod="openshift-marketplace/redhat-marketplace-v5746"
Feb 27 18:18:15 crc kubenswrapper[4708]: I0227 18:18:15.744023 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dm9g\" (UniqueName: \"kubernetes.io/projected/fabec868-3d01-4bf9-b042-15f99cb49544-kube-api-access-6dm9g\") pod \"redhat-marketplace-v5746\" (UID: \"fabec868-3d01-4bf9-b042-15f99cb49544\") " pod="openshift-marketplace/redhat-marketplace-v5746"
Feb 27 18:18:15 crc kubenswrapper[4708]: I0227 18:18:15.744079 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabec868-3d01-4bf9-b042-15f99cb49544-utilities\") pod \"redhat-marketplace-v5746\" (UID: \"fabec868-3d01-4bf9-b042-15f99cb49544\") " pod="openshift-marketplace/redhat-marketplace-v5746"
Feb 27 18:18:15 crc kubenswrapper[4708]: I0227 18:18:15.744117 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabec868-3d01-4bf9-b042-15f99cb49544-catalog-content\") pod \"redhat-marketplace-v5746\" (UID: \"fabec868-3d01-4bf9-b042-15f99cb49544\") " pod="openshift-marketplace/redhat-marketplace-v5746"
Feb 27 18:18:15 crc kubenswrapper[4708]: I0227 18:18:15.744733 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabec868-3d01-4bf9-b042-15f99cb49544-catalog-content\") pod \"redhat-marketplace-v5746\" (UID: \"fabec868-3d01-4bf9-b042-15f99cb49544\") " pod="openshift-marketplace/redhat-marketplace-v5746"
Feb 27 18:18:15 crc kubenswrapper[4708]: I0227 18:18:15.744900 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabec868-3d01-4bf9-b042-15f99cb49544-utilities\") pod \"redhat-marketplace-v5746\" (UID: \"fabec868-3d01-4bf9-b042-15f99cb49544\") " pod="openshift-marketplace/redhat-marketplace-v5746"
Feb 27 18:18:15 crc kubenswrapper[4708]: I0227 18:18:15.763103 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dm9g\" (UniqueName: \"kubernetes.io/projected/fabec868-3d01-4bf9-b042-15f99cb49544-kube-api-access-6dm9g\") pod \"redhat-marketplace-v5746\" (UID: \"fabec868-3d01-4bf9-b042-15f99cb49544\") " pod="openshift-marketplace/redhat-marketplace-v5746"
Feb 27 18:18:15 crc kubenswrapper[4708]: I0227 18:18:15.921447 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v5746"
Feb 27 18:18:16 crc kubenswrapper[4708]: E0227 18:18:16.229890 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d"
Feb 27 18:18:16 crc kubenswrapper[4708]: I0227 18:18:16.907407 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5746"]
Feb 27 18:18:16 crc kubenswrapper[4708]: W0227 18:18:16.913744 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfabec868_3d01_4bf9_b042_15f99cb49544.slice/crio-d0417dd598be3788186a865f7992f4917c30b8636bf0c63b29833443d73c99e7 WatchSource:0}: Error finding container d0417dd598be3788186a865f7992f4917c30b8636bf0c63b29833443d73c99e7: Status 404 returned error can't find the container with id d0417dd598be3788186a865f7992f4917c30b8636bf0c63b29833443d73c99e7
Feb 27 18:18:17 crc kubenswrapper[4708]: I0227 18:18:17.620296 4708 generic.go:334] "Generic (PLEG): container finished" podID="fabec868-3d01-4bf9-b042-15f99cb49544" containerID="1f148645640ea164e894d3f99e2b84a722c774f3305021c4f6530a49ac31d945" exitCode=0
Feb 27 18:18:17 crc kubenswrapper[4708]: I0227 18:18:17.620362 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5746" event={"ID":"fabec868-3d01-4bf9-b042-15f99cb49544","Type":"ContainerDied","Data":"1f148645640ea164e894d3f99e2b84a722c774f3305021c4f6530a49ac31d945"}
Feb 27 18:18:17 crc kubenswrapper[4708]: I0227 18:18:17.620401 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5746" event={"ID":"fabec868-3d01-4bf9-b042-15f99cb49544","Type":"ContainerStarted","Data":"d0417dd598be3788186a865f7992f4917c30b8636bf0c63b29833443d73c99e7"}
Feb 27 18:18:18 crc kubenswrapper[4708]: E0227 18:18:18.249444 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest"
Feb 27 18:18:18 crc kubenswrapper[4708]: E0227 18:18:18.249631 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 18:18:18 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Feb 27 18:18:18 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fz7r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536938-rpzzd_openshift-infra(c13e5f6d-7286-4ea5-bad3-84d30d472475): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)
Feb 27 18:18:18 crc kubenswrapper[4708]: > logger="UnhandledError"
Feb 27 18:18:18 crc kubenswrapper[4708]: E0227 18:18:18.250706 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" podUID="c13e5f6d-7286-4ea5-bad3-84d30d472475"
Feb 27 18:18:18 crc kubenswrapper[4708]: E0227 18:18:18.327382 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 27 18:18:18 crc kubenswrapper[4708]: E0227 18:18:18.327546 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dm9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-v5746_openshift-marketplace(fabec868-3d01-4bf9-b042-15f99cb49544): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 18:18:18 crc kubenswrapper[4708]: E0227 18:18:18.328732 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544"
Feb 27 18:18:18 crc kubenswrapper[4708]: E0227 18:18:18.639798 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544"
Feb 27 18:18:19 crc kubenswrapper[4708]: E0227 18:18:19.230530 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-g7g7q" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4"
Feb 27 18:18:19 crc kubenswrapper[4708]: E0227 18:18:19.230616 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce"
Feb 27 18:18:24 crc kubenswrapper[4708]: E0227 18:18:24.231662 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:18:25 crc kubenswrapper[4708]: E0227 18:18:25.231662 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7"
Feb 27 18:18:29 crc kubenswrapper[4708]: E0227 18:18:29.232177 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d"
Feb 27 18:18:30 crc kubenswrapper[4708]: E0227 18:18:30.230634 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" podUID="c13e5f6d-7286-4ea5-bad3-84d30d472475"
Feb 27 18:18:31 crc kubenswrapper[4708]: E0227 18:18:31.818606 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18"
Feb 27 18:18:31 crc kubenswrapper[4708]: E0227 18:18:31.819139 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2jnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-g7g7q_openshift-marketplace(11d1fa9e-19f0-40a1-9c60-34f11efc63f4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 18:18:31 crc kubenswrapper[4708]: E0227 18:18:31.820379 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-g7g7q" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4"
Feb 27 18:18:34 crc kubenswrapper[4708]: E0227 18:18:34.231899 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce"
Feb 27 18:18:34 crc kubenswrapper[4708]: E0227 18:18:34.940892 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 27 18:18:34 crc kubenswrapper[4708]: E0227 18:18:34.941151 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dm9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-v5746_openshift-marketplace(fabec868-3d01-4bf9-b042-15f99cb49544): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 18:18:34 crc kubenswrapper[4708]: E0227 18:18:34.942491 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544"
Feb 27 18:18:35 crc kubenswrapper[4708]: I0227 18:18:35.632204 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 18:18:35 crc kubenswrapper[4708]: I0227 18:18:35.632506 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Feb 27 18:18:38 crc kubenswrapper[4708]: E0227 18:18:38.232186 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:18:39 crc kubenswrapper[4708]: E0227 18:18:39.230946 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7"
Feb 27 18:18:42 crc kubenswrapper[4708]: E0227 18:18:42.242327 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d"
Feb 27 18:18:45 crc kubenswrapper[4708]: E0227 18:18:45.336775 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest"
Feb 27 18:18:45 crc kubenswrapper[4708]: E0227 18:18:45.337230 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 18:18:45 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Feb 27 18:18:45 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-fz7r7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536938-rpzzd_openshift-infra(c13e5f6d-7286-4ea5-bad3-84d30d472475): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)
Feb 27 18:18:45 crc kubenswrapper[4708]: > logger="UnhandledError"
Feb 27 18:18:45 crc kubenswrapper[4708]: E0227 18:18:45.338519 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" podUID="c13e5f6d-7286-4ea5-bad3-84d30d472475"
Feb 27 18:18:46 crc kubenswrapper[4708]: E0227 18:18:46.246405 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-g7g7q" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4"
Feb 27 18:18:48 crc kubenswrapper[4708]: E0227 18:18:48.237100 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544"
Feb 27 18:18:48 crc kubenswrapper[4708]: E0227 18:18:48.237464 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce"
Feb 27 18:18:52 crc kubenswrapper[4708]: E0227 18:18:52.244515 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:18:53 crc kubenswrapper[4708]: E0227 18:18:53.230454 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7"
Feb 27 18:18:56 crc kubenswrapper[4708]: E0227 18:18:56.231459 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d"
Feb 27 18:18:59 crc kubenswrapper[4708]: E0227 18:18:59.230657 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-g7g7q" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4"
Feb 27 18:19:00 crc kubenswrapper[4708]: E0227 18:19:00.232297 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" podUID="c13e5f6d-7286-4ea5-bad3-84d30d472475"
Feb 27 18:19:00 crc kubenswrapper[4708]: E0227 18:19:00.232985 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce"
Feb 27 18:19:03 crc kubenswrapper[4708]: E0227 18:19:03.969577 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Feb 27 18:19:03 crc kubenswrapper[4708]: E0227 18:19:03.971118 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dm9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-v5746_openshift-marketplace(fabec868-3d01-4bf9-b042-15f99cb49544): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError"
Feb 27 18:19:03 crc kubenswrapper[4708]: E0227 18:19:03.972435 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544"
Feb 27 18:19:05 crc kubenswrapper[4708]: I0227 18:19:05.631723 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Feb 27 18:19:05 crc kubenswrapper[4708]: I0227 18:19:05.632022 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:19:06 crc kubenswrapper[4708]: E0227 18:19:06.231636 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:19:07 crc kubenswrapper[4708]: E0227 18:19:07.229582 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:19:11 crc kubenswrapper[4708]: E0227 18:19:11.231979 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-g7g7q" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4" Feb 27 18:19:11 crc kubenswrapper[4708]: E0227 18:19:11.232502 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:19:12 crc kubenswrapper[4708]: E0227 18:19:12.241591 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" podUID="c13e5f6d-7286-4ea5-bad3-84d30d472475" Feb 27 18:19:13 crc kubenswrapper[4708]: E0227 18:19:13.240112 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:19:16 crc kubenswrapper[4708]: E0227 18:19:16.231026 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:19:21 crc kubenswrapper[4708]: E0227 18:19:21.231239 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:19:23 crc kubenswrapper[4708]: E0227 18:19:23.116820 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: 
status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:19:23 crc kubenswrapper[4708]: E0227 18:19:23.117537 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:19:23 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:19:23 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tb8pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536928-k2dpc_openshift-infra(7be693cf-322d-4ac9-b66c-35a281510ef4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:19:23 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:19:23 crc kubenswrapper[4708]: E0227 18:19:23.119079 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:19:24 crc kubenswrapper[4708]: E0227 18:19:24.231277 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:19:24 crc kubenswrapper[4708]: E0227 18:19:24.231562 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:19:25 crc kubenswrapper[4708]: E0227 18:19:25.231111 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" podUID="c13e5f6d-7286-4ea5-bad3-84d30d472475" Feb 27 18:19:27 crc kubenswrapper[4708]: I0227 18:19:27.421632 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-g7g7q" event={"ID":"11d1fa9e-19f0-40a1-9c60-34f11efc63f4","Type":"ContainerStarted","Data":"58c9e4123938e710351aa8ce75213e0bc342bdfaaa57962eba5896d282b78ca7"} Feb 27 18:19:28 crc kubenswrapper[4708]: I0227 18:19:28.435220 4708 generic.go:334] "Generic (PLEG): container finished" podID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4" containerID="58c9e4123938e710351aa8ce75213e0bc342bdfaaa57962eba5896d282b78ca7" exitCode=0 Feb 27 18:19:28 crc kubenswrapper[4708]: I0227 18:19:28.435301 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7g7q" event={"ID":"11d1fa9e-19f0-40a1-9c60-34f11efc63f4","Type":"ContainerDied","Data":"58c9e4123938e710351aa8ce75213e0bc342bdfaaa57962eba5896d282b78ca7"} Feb 27 18:19:29 crc kubenswrapper[4708]: E0227 18:19:29.230126 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:19:29 crc kubenswrapper[4708]: I0227 18:19:29.447492 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7g7q" event={"ID":"11d1fa9e-19f0-40a1-9c60-34f11efc63f4","Type":"ContainerStarted","Data":"f09e54365cc18e500fe4ad8625e774a978a5c698f9bb081d11928d38d72a23da"} Feb 27 18:19:29 crc kubenswrapper[4708]: I0227 18:19:29.472336 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-g7g7q" podStartSLOduration=2.961869365 podStartE2EDuration="1m37.472318008s" podCreationTimestamp="2026-02-27 18:17:52 +0000 UTC" firstStartedPulling="2026-02-27 18:17:54.320789725 +0000 UTC m=+5072.836587342" lastFinishedPulling="2026-02-27 18:19:28.831238358 +0000 UTC m=+5167.347035985" observedRunningTime="2026-02-27 18:19:29.464269252 +0000 UTC m=+5167.980066879" watchObservedRunningTime="2026-02-27 18:19:29.472318008 +0000 UTC m=+5167.988115595" Feb 27 18:19:32 crc kubenswrapper[4708]: E0227 18:19:32.269117 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:19:32 crc kubenswrapper[4708]: I0227 18:19:32.993341 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-g7g7q" Feb 27 18:19:32 crc kubenswrapper[4708]: I0227 18:19:32.993402 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-g7g7q" Feb 27 18:19:33 crc kubenswrapper[4708]: I0227 18:19:33.062265 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-g7g7q" Feb 27 18:19:35 crc kubenswrapper[4708]: E0227 18:19:35.230160 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:19:35 crc kubenswrapper[4708]: I0227 18:19:35.632160 4708 patch_prober.go:28] interesting 
pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:19:35 crc kubenswrapper[4708]: I0227 18:19:35.632228 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:19:35 crc kubenswrapper[4708]: I0227 18:19:35.632283 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 18:19:35 crc kubenswrapper[4708]: I0227 18:19:35.633401 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:19:35 crc kubenswrapper[4708]: I0227 18:19:35.633507 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" gracePeriod=600 Feb 27 18:19:35 crc kubenswrapper[4708]: E0227 18:19:35.760033 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:19:36 crc kubenswrapper[4708]: E0227 18:19:36.229833 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.523124 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" exitCode=0 Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.523255 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785"} Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.523526 4708 scope.go:117] "RemoveContainer" containerID="22ded62c57513f4a94873c5f2f7942c7a83d9f03a56582087a9e2ac2ff8ceafc" Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.524762 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:19:36 crc kubenswrapper[4708]: E0227 18:19:36.525417 
4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.621430 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8hnrf"] Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.625475 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.663685 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8hnrf"] Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.790415 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16806a3d-cd29-4361-9351-9367a118f880-utilities\") pod \"certified-operators-8hnrf\" (UID: \"16806a3d-cd29-4361-9351-9367a118f880\") " pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.790693 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16806a3d-cd29-4361-9351-9367a118f880-catalog-content\") pod \"certified-operators-8hnrf\" (UID: \"16806a3d-cd29-4361-9351-9367a118f880\") " pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.790824 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpz9v\" (UniqueName: \"kubernetes.io/projected/16806a3d-cd29-4361-9351-9367a118f880-kube-api-access-rpz9v\") pod \"certified-operators-8hnrf\" (UID: \"16806a3d-cd29-4361-9351-9367a118f880\") " pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.892324 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16806a3d-cd29-4361-9351-9367a118f880-utilities\") pod \"certified-operators-8hnrf\" (UID: \"16806a3d-cd29-4361-9351-9367a118f880\") " pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.892451 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16806a3d-cd29-4361-9351-9367a118f880-catalog-content\") pod \"certified-operators-8hnrf\" (UID: \"16806a3d-cd29-4361-9351-9367a118f880\") " pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.892498 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpz9v\" (UniqueName: \"kubernetes.io/projected/16806a3d-cd29-4361-9351-9367a118f880-kube-api-access-rpz9v\") pod \"certified-operators-8hnrf\" (UID: \"16806a3d-cd29-4361-9351-9367a118f880\") " pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.893043 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16806a3d-cd29-4361-9351-9367a118f880-catalog-content\") pod \"certified-operators-8hnrf\" (UID: \"16806a3d-cd29-4361-9351-9367a118f880\") " pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.893327 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16806a3d-cd29-4361-9351-9367a118f880-utilities\") pod \"certified-operators-8hnrf\" (UID: \"16806a3d-cd29-4361-9351-9367a118f880\") " pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.920777 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpz9v\" (UniqueName: \"kubernetes.io/projected/16806a3d-cd29-4361-9351-9367a118f880-kube-api-access-rpz9v\") pod \"certified-operators-8hnrf\" (UID: \"16806a3d-cd29-4361-9351-9367a118f880\") " pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:36 crc kubenswrapper[4708]: I0227 18:19:36.951206 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:37 crc kubenswrapper[4708]: I0227 18:19:37.492915 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8hnrf"] Feb 27 18:19:38 crc kubenswrapper[4708]: I0227 18:19:38.545715 4708 generic.go:334] "Generic (PLEG): container finished" podID="16806a3d-cd29-4361-9351-9367a118f880" containerID="c0095a4d38a78913e85df2e6b1dd690c8f97facf2cef2b4b26fcfe47cb342f63" exitCode=0 Feb 27 18:19:38 crc kubenswrapper[4708]: I0227 18:19:38.545754 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hnrf" event={"ID":"16806a3d-cd29-4361-9351-9367a118f880","Type":"ContainerDied","Data":"c0095a4d38a78913e85df2e6b1dd690c8f97facf2cef2b4b26fcfe47cb342f63"} Feb 27 18:19:38 crc kubenswrapper[4708]: I0227 18:19:38.545779 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hnrf" event={"ID":"16806a3d-cd29-4361-9351-9367a118f880","Type":"ContainerStarted","Data":"18dff3a775edce5648052eea49b31f3099a4e0b3ca76e37469c4289a35b1d5b2"} Feb 27 18:19:39 crc kubenswrapper[4708]: E0227 18:19:39.229749 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:19:39 crc kubenswrapper[4708]: I0227 18:19:39.556170 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hnrf" event={"ID":"16806a3d-cd29-4361-9351-9367a118f880","Type":"ContainerStarted","Data":"262235a35ece7039593d68491bbb32947db6546545039e1113b1ac2e1ce75e3d"} Feb 27 18:19:40 crc kubenswrapper[4708]: E0227 18:19:40.232752 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:19:40 crc kubenswrapper[4708]: I0227 18:19:40.607823 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hnrf" 
event={"ID":"16806a3d-cd29-4361-9351-9367a118f880","Type":"ContainerDied","Data":"262235a35ece7039593d68491bbb32947db6546545039e1113b1ac2e1ce75e3d"} Feb 27 18:19:40 crc kubenswrapper[4708]: I0227 18:19:40.607781 4708 generic.go:334] "Generic (PLEG): container finished" podID="16806a3d-cd29-4361-9351-9367a118f880" containerID="262235a35ece7039593d68491bbb32947db6546545039e1113b1ac2e1ce75e3d" exitCode=0 Feb 27 18:19:40 crc kubenswrapper[4708]: I0227 18:19:40.632077 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" event={"ID":"c13e5f6d-7286-4ea5-bad3-84d30d472475","Type":"ContainerStarted","Data":"8d6a3fa0ef187744a4abfea5b559fb4c1701c9663d4ff29e32318396bccca779"} Feb 27 18:19:40 crc kubenswrapper[4708]: I0227 18:19:40.669587 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" podStartSLOduration=1.591801942 podStartE2EDuration="1m40.669567474s" podCreationTimestamp="2026-02-27 18:18:00 +0000 UTC" firstStartedPulling="2026-02-27 18:18:00.980833358 +0000 UTC m=+5079.496630995" lastFinishedPulling="2026-02-27 18:19:40.05859891 +0000 UTC m=+5178.574396527" observedRunningTime="2026-02-27 18:19:40.65839629 +0000 UTC m=+5179.174193877" watchObservedRunningTime="2026-02-27 18:19:40.669567474 +0000 UTC m=+5179.185365051" Feb 27 18:19:41 crc kubenswrapper[4708]: I0227 18:19:41.648811 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hnrf" event={"ID":"16806a3d-cd29-4361-9351-9367a118f880","Type":"ContainerStarted","Data":"de2e29c7244efed88b71fd389f1c9e7641457724e2b9e720993d9600f806d079"} Feb 27 18:19:41 crc kubenswrapper[4708]: I0227 18:19:41.650589 4708 generic.go:334] "Generic (PLEG): container finished" podID="c13e5f6d-7286-4ea5-bad3-84d30d472475" containerID="8d6a3fa0ef187744a4abfea5b559fb4c1701c9663d4ff29e32318396bccca779" exitCode=0 Feb 27 18:19:41 crc kubenswrapper[4708]: I0227 18:19:41.650627 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" event={"ID":"c13e5f6d-7286-4ea5-bad3-84d30d472475","Type":"ContainerDied","Data":"8d6a3fa0ef187744a4abfea5b559fb4c1701c9663d4ff29e32318396bccca779"} Feb 27 18:19:41 crc kubenswrapper[4708]: I0227 18:19:41.688977 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8hnrf" podStartSLOduration=3.141711787 podStartE2EDuration="5.688953126s" podCreationTimestamp="2026-02-27 18:19:36 +0000 UTC" firstStartedPulling="2026-02-27 18:19:38.548407271 +0000 UTC m=+5177.064204858" lastFinishedPulling="2026-02-27 18:19:41.09564858 +0000 UTC m=+5179.611446197" observedRunningTime="2026-02-27 18:19:41.675758495 +0000 UTC m=+5180.191556082" watchObservedRunningTime="2026-02-27 18:19:41.688953126 +0000 UTC m=+5180.204750713" Feb 27 18:19:43 crc kubenswrapper[4708]: I0227 18:19:43.041748 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-g7g7q" Feb 27 18:19:43 crc kubenswrapper[4708]: I0227 18:19:43.083583 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" Feb 27 18:19:43 crc kubenswrapper[4708]: I0227 18:19:43.240793 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fz7r7\" (UniqueName: \"kubernetes.io/projected/c13e5f6d-7286-4ea5-bad3-84d30d472475-kube-api-access-fz7r7\") pod \"c13e5f6d-7286-4ea5-bad3-84d30d472475\" (UID: \"c13e5f6d-7286-4ea5-bad3-84d30d472475\") " Feb 27 18:19:43 crc kubenswrapper[4708]: I0227 18:19:43.246569 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c13e5f6d-7286-4ea5-bad3-84d30d472475-kube-api-access-fz7r7" (OuterVolumeSpecName: "kube-api-access-fz7r7") pod "c13e5f6d-7286-4ea5-bad3-84d30d472475" (UID: "c13e5f6d-7286-4ea5-bad3-84d30d472475"). InnerVolumeSpecName "kube-api-access-fz7r7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:19:43 crc kubenswrapper[4708]: I0227 18:19:43.343870 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fz7r7\" (UniqueName: \"kubernetes.io/projected/c13e5f6d-7286-4ea5-bad3-84d30d472475-kube-api-access-fz7r7\") on node \"crc\" DevicePath \"\"" Feb 27 18:19:43 crc kubenswrapper[4708]: I0227 18:19:43.672385 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" event={"ID":"c13e5f6d-7286-4ea5-bad3-84d30d472475","Type":"ContainerDied","Data":"1bfd0f43e954e91ef3e8e32c6adb3a7d31ec028da7e7e95580e8fd8fd9e11270"} Feb 27 18:19:43 crc kubenswrapper[4708]: I0227 18:19:43.672422 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bfd0f43e954e91ef3e8e32c6adb3a7d31ec028da7e7e95580e8fd8fd9e11270" Feb 27 18:19:43 crc kubenswrapper[4708]: I0227 18:19:43.672467 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536938-rpzzd" Feb 27 18:19:43 crc kubenswrapper[4708]: I0227 18:19:43.735572 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536924-7s2wb"] Feb 27 18:19:43 crc kubenswrapper[4708]: I0227 18:19:43.743281 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536924-7s2wb"] Feb 27 18:19:43 crc kubenswrapper[4708]: I0227 18:19:43.997277 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g7g7q"] Feb 27 18:19:43 crc kubenswrapper[4708]: I0227 18:19:43.997553 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-g7g7q" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4" containerName="registry-server" containerID="cri-o://f09e54365cc18e500fe4ad8625e774a978a5c698f9bb081d11928d38d72a23da" gracePeriod=2 Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.241941 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="74986739-6955-4b40-b3e5-6bde3a3c5695" path="/var/lib/kubelet/pods/74986739-6955-4b40-b3e5-6bde3a3c5695/volumes" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.528449 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-g7g7q" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.682242 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-utilities\") pod \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\" (UID: \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\") " Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.682402 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2jnt\" (UniqueName: \"kubernetes.io/projected/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-kube-api-access-n2jnt\") pod \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\" (UID: \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\") " Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.682600 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-catalog-content\") pod \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\" (UID: \"11d1fa9e-19f0-40a1-9c60-34f11efc63f4\") " Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.682887 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-utilities" (OuterVolumeSpecName: "utilities") pod "11d1fa9e-19f0-40a1-9c60-34f11efc63f4" (UID: "11d1fa9e-19f0-40a1-9c60-34f11efc63f4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.683283 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.686133 4708 generic.go:334] "Generic (PLEG): container finished" podID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4" containerID="f09e54365cc18e500fe4ad8625e774a978a5c698f9bb081d11928d38d72a23da" exitCode=0 Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.686176 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7g7q" event={"ID":"11d1fa9e-19f0-40a1-9c60-34f11efc63f4","Type":"ContainerDied","Data":"f09e54365cc18e500fe4ad8625e774a978a5c698f9bb081d11928d38d72a23da"} Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.686204 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-g7g7q" event={"ID":"11d1fa9e-19f0-40a1-9c60-34f11efc63f4","Type":"ContainerDied","Data":"d661a2c3b15077263790ea46c9976310f0c7d8e9a24f94010612f56a79e79f97"} Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.686215 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-g7g7q" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.686226 4708 scope.go:117] "RemoveContainer" containerID="f09e54365cc18e500fe4ad8625e774a978a5c698f9bb081d11928d38d72a23da" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.688201 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-kube-api-access-n2jnt" (OuterVolumeSpecName: "kube-api-access-n2jnt") pod "11d1fa9e-19f0-40a1-9c60-34f11efc63f4" (UID: "11d1fa9e-19f0-40a1-9c60-34f11efc63f4"). InnerVolumeSpecName "kube-api-access-n2jnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.744361 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "11d1fa9e-19f0-40a1-9c60-34f11efc63f4" (UID: "11d1fa9e-19f0-40a1-9c60-34f11efc63f4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.751047 4708 scope.go:117] "RemoveContainer" containerID="58c9e4123938e710351aa8ce75213e0bc342bdfaaa57962eba5896d282b78ca7" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.775428 4708 scope.go:117] "RemoveContainer" containerID="6e59d0214df43094b84878e33fa3613e76a43139503d9eade6beca2ccf508120" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.785475 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.785510 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2jnt\" (UniqueName: \"kubernetes.io/projected/11d1fa9e-19f0-40a1-9c60-34f11efc63f4-kube-api-access-n2jnt\") on node \"crc\" DevicePath \"\"" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.812607 4708 scope.go:117] "RemoveContainer" containerID="f09e54365cc18e500fe4ad8625e774a978a5c698f9bb081d11928d38d72a23da" Feb 27 18:19:44 crc kubenswrapper[4708]: E0227 18:19:44.813434 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f09e54365cc18e500fe4ad8625e774a978a5c698f9bb081d11928d38d72a23da\": container with ID starting with f09e54365cc18e500fe4ad8625e774a978a5c698f9bb081d11928d38d72a23da not found: ID does not exist" containerID="f09e54365cc18e500fe4ad8625e774a978a5c698f9bb081d11928d38d72a23da" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.813501 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f09e54365cc18e500fe4ad8625e774a978a5c698f9bb081d11928d38d72a23da"} err="failed to get container status \"f09e54365cc18e500fe4ad8625e774a978a5c698f9bb081d11928d38d72a23da\": rpc error: code = NotFound desc = could not find container \"f09e54365cc18e500fe4ad8625e774a978a5c698f9bb081d11928d38d72a23da\": container with ID starting with f09e54365cc18e500fe4ad8625e774a978a5c698f9bb081d11928d38d72a23da not found: ID does not exist" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.813532 4708 scope.go:117] "RemoveContainer" containerID="58c9e4123938e710351aa8ce75213e0bc342bdfaaa57962eba5896d282b78ca7" Feb 27 18:19:44 crc kubenswrapper[4708]: E0227 18:19:44.814024 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"58c9e4123938e710351aa8ce75213e0bc342bdfaaa57962eba5896d282b78ca7\": container with ID starting with 58c9e4123938e710351aa8ce75213e0bc342bdfaaa57962eba5896d282b78ca7 not found: ID does not exist" containerID="58c9e4123938e710351aa8ce75213e0bc342bdfaaa57962eba5896d282b78ca7" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.814057 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"58c9e4123938e710351aa8ce75213e0bc342bdfaaa57962eba5896d282b78ca7"} err="failed to get container 
status \"58c9e4123938e710351aa8ce75213e0bc342bdfaaa57962eba5896d282b78ca7\": rpc error: code = NotFound desc = could not find container \"58c9e4123938e710351aa8ce75213e0bc342bdfaaa57962eba5896d282b78ca7\": container with ID starting with 58c9e4123938e710351aa8ce75213e0bc342bdfaaa57962eba5896d282b78ca7 not found: ID does not exist" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.814079 4708 scope.go:117] "RemoveContainer" containerID="6e59d0214df43094b84878e33fa3613e76a43139503d9eade6beca2ccf508120" Feb 27 18:19:44 crc kubenswrapper[4708]: E0227 18:19:44.814570 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e59d0214df43094b84878e33fa3613e76a43139503d9eade6beca2ccf508120\": container with ID starting with 6e59d0214df43094b84878e33fa3613e76a43139503d9eade6beca2ccf508120 not found: ID does not exist" containerID="6e59d0214df43094b84878e33fa3613e76a43139503d9eade6beca2ccf508120" Feb 27 18:19:44 crc kubenswrapper[4708]: I0227 18:19:44.814620 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e59d0214df43094b84878e33fa3613e76a43139503d9eade6beca2ccf508120"} err="failed to get container status \"6e59d0214df43094b84878e33fa3613e76a43139503d9eade6beca2ccf508120\": rpc error: code = NotFound desc = could not find container \"6e59d0214df43094b84878e33fa3613e76a43139503d9eade6beca2ccf508120\": container with ID starting with 6e59d0214df43094b84878e33fa3613e76a43139503d9eade6beca2ccf508120 not found: ID does not exist" Feb 27 18:19:45 crc kubenswrapper[4708]: I0227 18:19:45.027619 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-g7g7q"] Feb 27 18:19:45 crc kubenswrapper[4708]: I0227 18:19:45.038601 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-g7g7q"] Feb 27 18:19:45 crc kubenswrapper[4708]: E0227 18:19:45.231046 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:19:46 crc kubenswrapper[4708]: I0227 18:19:46.238579 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4" path="/var/lib/kubelet/pods/11d1fa9e-19f0-40a1-9c60-34f11efc63f4/volumes" Feb 27 18:19:46 crc kubenswrapper[4708]: I0227 18:19:46.951342 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:46 crc kubenswrapper[4708]: I0227 18:19:46.951420 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:47 crc kubenswrapper[4708]: E0227 18:19:47.231298 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:19:48 crc kubenswrapper[4708]: I0227 18:19:48.024829 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8hnrf" podUID="16806a3d-cd29-4361-9351-9367a118f880" containerName="registry-server" probeResult="failure" 
output=< Feb 27 18:19:48 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 18:19:48 crc kubenswrapper[4708]: > Feb 27 18:19:48 crc kubenswrapper[4708]: E0227 18:19:48.231343 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:19:49 crc kubenswrapper[4708]: I0227 18:19:49.229406 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:19:49 crc kubenswrapper[4708]: E0227 18:19:49.230314 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:19:52 crc kubenswrapper[4708]: E0227 18:19:52.244549 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:19:55 crc kubenswrapper[4708]: E0227 18:19:55.983965 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 18:19:55 crc kubenswrapper[4708]: E0227 18:19:55.984132 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dm9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-v5746_openshift-marketplace(fabec868-3d01-4bf9-b042-15f99cb49544): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:19:55 crc kubenswrapper[4708]: E0227 18:19:55.985300 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:19:57 crc kubenswrapper[4708]: I0227 18:19:57.009788 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:57 crc kubenswrapper[4708]: I0227 18:19:57.085283 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:57 crc kubenswrapper[4708]: I0227 18:19:57.255793 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8hnrf"] Feb 27 18:19:58 crc kubenswrapper[4708]: I0227 18:19:58.854618 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8hnrf" podUID="16806a3d-cd29-4361-9351-9367a118f880" containerName="registry-server" containerID="cri-o://de2e29c7244efed88b71fd389f1c9e7641457724e2b9e720993d9600f806d079" gracePeriod=2 Feb 27 18:19:59 crc kubenswrapper[4708]: E0227 18:19:59.229878 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.360081 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.432476 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpz9v\" (UniqueName: \"kubernetes.io/projected/16806a3d-cd29-4361-9351-9367a118f880-kube-api-access-rpz9v\") pod \"16806a3d-cd29-4361-9351-9367a118f880\" (UID: \"16806a3d-cd29-4361-9351-9367a118f880\") " Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.432993 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16806a3d-cd29-4361-9351-9367a118f880-utilities\") pod \"16806a3d-cd29-4361-9351-9367a118f880\" (UID: \"16806a3d-cd29-4361-9351-9367a118f880\") " Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.433090 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16806a3d-cd29-4361-9351-9367a118f880-catalog-content\") pod \"16806a3d-cd29-4361-9351-9367a118f880\" (UID: \"16806a3d-cd29-4361-9351-9367a118f880\") " Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.437271 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16806a3d-cd29-4361-9351-9367a118f880-kube-api-access-rpz9v" (OuterVolumeSpecName: "kube-api-access-rpz9v") pod "16806a3d-cd29-4361-9351-9367a118f880" (UID: "16806a3d-cd29-4361-9351-9367a118f880"). InnerVolumeSpecName "kube-api-access-rpz9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.440141 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16806a3d-cd29-4361-9351-9367a118f880-utilities" (OuterVolumeSpecName: "utilities") pod "16806a3d-cd29-4361-9351-9367a118f880" (UID: "16806a3d-cd29-4361-9351-9367a118f880"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.493168 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16806a3d-cd29-4361-9351-9367a118f880-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "16806a3d-cd29-4361-9351-9367a118f880" (UID: "16806a3d-cd29-4361-9351-9367a118f880"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.534352 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpz9v\" (UniqueName: \"kubernetes.io/projected/16806a3d-cd29-4361-9351-9367a118f880-kube-api-access-rpz9v\") on node \"crc\" DevicePath \"\"" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.534382 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/16806a3d-cd29-4361-9351-9367a118f880-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.534391 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/16806a3d-cd29-4361-9351-9367a118f880-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.865538 4708 generic.go:334] "Generic (PLEG): container finished" podID="16806a3d-cd29-4361-9351-9367a118f880" containerID="de2e29c7244efed88b71fd389f1c9e7641457724e2b9e720993d9600f806d079" exitCode=0 Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.865577 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hnrf" event={"ID":"16806a3d-cd29-4361-9351-9367a118f880","Type":"ContainerDied","Data":"de2e29c7244efed88b71fd389f1c9e7641457724e2b9e720993d9600f806d079"} Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.865590 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hnrf" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.865603 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hnrf" event={"ID":"16806a3d-cd29-4361-9351-9367a118f880","Type":"ContainerDied","Data":"18dff3a775edce5648052eea49b31f3099a4e0b3ca76e37469c4289a35b1d5b2"} Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.865620 4708 scope.go:117] "RemoveContainer" containerID="de2e29c7244efed88b71fd389f1c9e7641457724e2b9e720993d9600f806d079" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.893580 4708 scope.go:117] "RemoveContainer" containerID="262235a35ece7039593d68491bbb32947db6546545039e1113b1ac2e1ce75e3d" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.900121 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8hnrf"] Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.906467 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8hnrf"] Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.921063 4708 scope.go:117] "RemoveContainer" containerID="c0095a4d38a78913e85df2e6b1dd690c8f97facf2cef2b4b26fcfe47cb342f63" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.962390 4708 scope.go:117] "RemoveContainer" containerID="de2e29c7244efed88b71fd389f1c9e7641457724e2b9e720993d9600f806d079" Feb 27 18:19:59 crc kubenswrapper[4708]: E0227 18:19:59.962876 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de2e29c7244efed88b71fd389f1c9e7641457724e2b9e720993d9600f806d079\": container with ID starting with de2e29c7244efed88b71fd389f1c9e7641457724e2b9e720993d9600f806d079 not found: ID does not exist" containerID="de2e29c7244efed88b71fd389f1c9e7641457724e2b9e720993d9600f806d079" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.962945 
4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de2e29c7244efed88b71fd389f1c9e7641457724e2b9e720993d9600f806d079"} err="failed to get container status \"de2e29c7244efed88b71fd389f1c9e7641457724e2b9e720993d9600f806d079\": rpc error: code = NotFound desc = could not find container \"de2e29c7244efed88b71fd389f1c9e7641457724e2b9e720993d9600f806d079\": container with ID starting with de2e29c7244efed88b71fd389f1c9e7641457724e2b9e720993d9600f806d079 not found: ID does not exist" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.962994 4708 scope.go:117] "RemoveContainer" containerID="262235a35ece7039593d68491bbb32947db6546545039e1113b1ac2e1ce75e3d" Feb 27 18:19:59 crc kubenswrapper[4708]: E0227 18:19:59.963330 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"262235a35ece7039593d68491bbb32947db6546545039e1113b1ac2e1ce75e3d\": container with ID starting with 262235a35ece7039593d68491bbb32947db6546545039e1113b1ac2e1ce75e3d not found: ID does not exist" containerID="262235a35ece7039593d68491bbb32947db6546545039e1113b1ac2e1ce75e3d" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.963367 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"262235a35ece7039593d68491bbb32947db6546545039e1113b1ac2e1ce75e3d"} err="failed to get container status \"262235a35ece7039593d68491bbb32947db6546545039e1113b1ac2e1ce75e3d\": rpc error: code = NotFound desc = could not find container \"262235a35ece7039593d68491bbb32947db6546545039e1113b1ac2e1ce75e3d\": container with ID starting with 262235a35ece7039593d68491bbb32947db6546545039e1113b1ac2e1ce75e3d not found: ID does not exist" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.963390 4708 scope.go:117] "RemoveContainer" containerID="c0095a4d38a78913e85df2e6b1dd690c8f97facf2cef2b4b26fcfe47cb342f63" Feb 27 18:19:59 crc kubenswrapper[4708]: E0227 18:19:59.963647 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0095a4d38a78913e85df2e6b1dd690c8f97facf2cef2b4b26fcfe47cb342f63\": container with ID starting with c0095a4d38a78913e85df2e6b1dd690c8f97facf2cef2b4b26fcfe47cb342f63 not found: ID does not exist" containerID="c0095a4d38a78913e85df2e6b1dd690c8f97facf2cef2b4b26fcfe47cb342f63" Feb 27 18:19:59 crc kubenswrapper[4708]: I0227 18:19:59.963682 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0095a4d38a78913e85df2e6b1dd690c8f97facf2cef2b4b26fcfe47cb342f63"} err="failed to get container status \"c0095a4d38a78913e85df2e6b1dd690c8f97facf2cef2b4b26fcfe47cb342f63\": rpc error: code = NotFound desc = could not find container \"c0095a4d38a78913e85df2e6b1dd690c8f97facf2cef2b4b26fcfe47cb342f63\": container with ID starting with c0095a4d38a78913e85df2e6b1dd690c8f97facf2cef2b4b26fcfe47cb342f63 not found: ID does not exist" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.155116 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536940-p7fj5"] Feb 27 18:20:00 crc kubenswrapper[4708]: E0227 18:20:00.156022 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16806a3d-cd29-4361-9351-9367a118f880" containerName="extract-utilities" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.156087 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="16806a3d-cd29-4361-9351-9367a118f880" 
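[Annotation] The RemoveContainer / NotFound pairs just above are routine rather than alarming: the kubelet asks CRI-O to delete container IDs whose records it still holds, CRI-O answers NotFound because the containers are already gone, and the kubelet logs the error and moves on. The same idempotent-delete pattern, reduced to a hedged sketch (the runtime interface and error type are stand-ins, not the kubelet's real Go code):

    class NotFoundError(Exception):
        """Stand-in for the gRPC NotFound status CRI-O returns above."""

    def remove_container_idempotent(runtime, container_id):
        """Delete a container, treating 'already gone' as success.

        `runtime` is a hypothetical CRI client with a remove_container method;
        this is only the shape of the logic, not kubelet source.
        """
        try:
            runtime.remove_container(container_id)
        except NotFoundError:
            return False  # nothing left to delete; exactly the NotFound logged above
        return True

Because deletion is treated as idempotent, no retry follows the NotFound, which is why each error appears once per container ID and then stops.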
containerName="extract-utilities" Feb 27 18:20:00 crc kubenswrapper[4708]: E0227 18:20:00.156154 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4" containerName="extract-utilities" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.156201 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4" containerName="extract-utilities" Feb 27 18:20:00 crc kubenswrapper[4708]: E0227 18:20:00.156252 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4" containerName="registry-server" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.156296 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4" containerName="registry-server" Feb 27 18:20:00 crc kubenswrapper[4708]: E0227 18:20:00.156349 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4" containerName="extract-content" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.156398 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4" containerName="extract-content" Feb 27 18:20:00 crc kubenswrapper[4708]: E0227 18:20:00.156535 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c13e5f6d-7286-4ea5-bad3-84d30d472475" containerName="oc" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.156584 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c13e5f6d-7286-4ea5-bad3-84d30d472475" containerName="oc" Feb 27 18:20:00 crc kubenswrapper[4708]: E0227 18:20:00.156636 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16806a3d-cd29-4361-9351-9367a118f880" containerName="extract-content" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.156681 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="16806a3d-cd29-4361-9351-9367a118f880" containerName="extract-content" Feb 27 18:20:00 crc kubenswrapper[4708]: E0227 18:20:00.156726 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16806a3d-cd29-4361-9351-9367a118f880" containerName="registry-server" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.156777 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="16806a3d-cd29-4361-9351-9367a118f880" containerName="registry-server" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.157017 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="11d1fa9e-19f0-40a1-9c60-34f11efc63f4" containerName="registry-server" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.157085 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="16806a3d-cd29-4361-9351-9367a118f880" containerName="registry-server" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.157163 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c13e5f6d-7286-4ea5-bad3-84d30d472475" containerName="oc" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.157942 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536940-p7fj5" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.168041 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536940-p7fj5"] Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.247369 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16806a3d-cd29-4361-9351-9367a118f880" path="/var/lib/kubelet/pods/16806a3d-cd29-4361-9351-9367a118f880/volumes" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.250139 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l4tg\" (UniqueName: \"kubernetes.io/projected/07c46d8a-05d5-44c8-86b3-a571832c34aa-kube-api-access-9l4tg\") pod \"auto-csr-approver-29536940-p7fj5\" (UID: \"07c46d8a-05d5-44c8-86b3-a571832c34aa\") " pod="openshift-infra/auto-csr-approver-29536940-p7fj5" Feb 27 18:20:00 crc kubenswrapper[4708]: I0227 18:20:00.352523 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l4tg\" (UniqueName: \"kubernetes.io/projected/07c46d8a-05d5-44c8-86b3-a571832c34aa-kube-api-access-9l4tg\") pod \"auto-csr-approver-29536940-p7fj5\" (UID: \"07c46d8a-05d5-44c8-86b3-a571832c34aa\") " pod="openshift-infra/auto-csr-approver-29536940-p7fj5" Feb 27 18:20:01 crc kubenswrapper[4708]: I0227 18:20:01.127570 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l4tg\" (UniqueName: \"kubernetes.io/projected/07c46d8a-05d5-44c8-86b3-a571832c34aa-kube-api-access-9l4tg\") pod \"auto-csr-approver-29536940-p7fj5\" (UID: \"07c46d8a-05d5-44c8-86b3-a571832c34aa\") " pod="openshift-infra/auto-csr-approver-29536940-p7fj5" Feb 27 18:20:01 crc kubenswrapper[4708]: E0227 18:20:01.209651 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:20:01 crc kubenswrapper[4708]: E0227 18:20:01.210197 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:20:01 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:20:01 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmkn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536934-qjmvw_openshift-infra(b35a5adf-48a7-4e39-9491-c45f9b71b9b7): ErrImagePull: copying system 
image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:20:01 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:20:01 crc kubenswrapper[4708]: E0227 18:20:01.212034 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:20:01 crc kubenswrapper[4708]: E0227 18:20:01.230584 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:20:01 crc kubenswrapper[4708]: I0227 18:20:01.379006 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536940-p7fj5" Feb 27 18:20:01 crc kubenswrapper[4708]: I0227 18:20:01.890222 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536940-p7fj5"] Feb 27 18:20:02 crc kubenswrapper[4708]: I0227 18:20:02.228313 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:20:02 crc kubenswrapper[4708]: E0227 18:20:02.228812 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:20:02 crc kubenswrapper[4708]: I0227 18:20:02.898675 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536940-p7fj5" event={"ID":"07c46d8a-05d5-44c8-86b3-a571832c34aa","Type":"ContainerStarted","Data":"6b064f9c1f35670204c8b061aa7f5d91aa5a52c13d72ce0f1dc37afc9d311ecf"} Feb 27 18:20:04 crc kubenswrapper[4708]: I0227 18:20:04.926767 4708 generic.go:334] "Generic (PLEG): container finished" podID="07c46d8a-05d5-44c8-86b3-a571832c34aa" containerID="15fa4c76171b9030125850145801b86e5b4e969534e613e1dc80127c9cf89800" exitCode=0 Feb 27 18:20:04 crc kubenswrapper[4708]: I0227 18:20:04.926997 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536940-p7fj5" event={"ID":"07c46d8a-05d5-44c8-86b3-a571832c34aa","Type":"ContainerDied","Data":"15fa4c76171b9030125850145801b86e5b4e969534e613e1dc80127c9cf89800"} Feb 27 18:20:04 crc kubenswrapper[4708]: I0227 18:20:04.942357 4708 scope.go:117] "RemoveContainer" containerID="a389c2e469b522445449069ce38b172556a14ba36ab1145134ea84ec2f032890" Feb 27 18:20:06 crc kubenswrapper[4708]: I0227 18:20:06.433595 4708 util.go:48] "No ready sandbox for pod can be found. 
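[Annotation] The &Container spec dumped in that "Unhandled Error" entry is useful documentation in itself: it shows the exact command every auto-csr-approver run executes, a bash one-liner that lists CSRs with an empty .status and pipes their names to oc adm certificate approve. A hedged Python equivalent is sketched below; it assumes oc on PATH and credentials permitted to approve CSRs, and the go-template string is copied verbatim from the spec above:

    import subprocess

    # Copied verbatim from the container spec in this journal; the raw string
    # keeps the \n escape literal so oc's Go template parser expands it.
    # It prints the name of every CSR whose .status is empty, one per line.
    TEMPLATE = r'{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}'

    def approve_pending_csrs():
        pending = subprocess.run(
            ["oc", "get", "csr", "-o", "go-template=" + TEMPLATE],
            check=True, capture_output=True, text=True,
        ).stdout.split()
        for name in pending:  # an empty list is a no-op, mirroring `xargs --no-run-if-empty`
            subprocess.run(["oc", "adm", "certificate", "approve", name], check=True)

    if __name__ == "__main__":
        approve_pending_csrs()

Note that the approver never runs while its image pull keeps failing; the loop above only describes what a successful run (like 29536940 and 29536942 later in this section) executes.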
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536940-p7fj5" Feb 27 18:20:06 crc kubenswrapper[4708]: I0227 18:20:06.607406 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l4tg\" (UniqueName: \"kubernetes.io/projected/07c46d8a-05d5-44c8-86b3-a571832c34aa-kube-api-access-9l4tg\") pod \"07c46d8a-05d5-44c8-86b3-a571832c34aa\" (UID: \"07c46d8a-05d5-44c8-86b3-a571832c34aa\") " Feb 27 18:20:06 crc kubenswrapper[4708]: I0227 18:20:06.614689 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c46d8a-05d5-44c8-86b3-a571832c34aa-kube-api-access-9l4tg" (OuterVolumeSpecName: "kube-api-access-9l4tg") pod "07c46d8a-05d5-44c8-86b3-a571832c34aa" (UID: "07c46d8a-05d5-44c8-86b3-a571832c34aa"). InnerVolumeSpecName "kube-api-access-9l4tg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:20:06 crc kubenswrapper[4708]: I0227 18:20:06.710703 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l4tg\" (UniqueName: \"kubernetes.io/projected/07c46d8a-05d5-44c8-86b3-a571832c34aa-kube-api-access-9l4tg\") on node \"crc\" DevicePath \"\"" Feb 27 18:20:06 crc kubenswrapper[4708]: I0227 18:20:06.977324 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536940-p7fj5" event={"ID":"07c46d8a-05d5-44c8-86b3-a571832c34aa","Type":"ContainerDied","Data":"6b064f9c1f35670204c8b061aa7f5d91aa5a52c13d72ce0f1dc37afc9d311ecf"} Feb 27 18:20:06 crc kubenswrapper[4708]: I0227 18:20:06.977392 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b064f9c1f35670204c8b061aa7f5d91aa5a52c13d72ce0f1dc37afc9d311ecf" Feb 27 18:20:06 crc kubenswrapper[4708]: I0227 18:20:06.977425 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536940-p7fj5" Feb 27 18:20:07 crc kubenswrapper[4708]: E0227 18:20:07.229885 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:20:07 crc kubenswrapper[4708]: I0227 18:20:07.502933 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536932-mq92q"] Feb 27 18:20:07 crc kubenswrapper[4708]: I0227 18:20:07.510706 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536932-mq92q"] Feb 27 18:20:08 crc kubenswrapper[4708]: I0227 18:20:08.248235 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="69db41fd-5c38-4d0a-8999-f8b595f26b06" path="/var/lib/kubelet/pods/69db41fd-5c38-4d0a-8999-f8b595f26b06/volumes" Feb 27 18:20:10 crc kubenswrapper[4708]: E0227 18:20:10.230459 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:20:11 crc kubenswrapper[4708]: E0227 18:20:11.232297 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:20:12 crc kubenswrapper[4708]: E0227 18:20:12.243762 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:20:12 crc kubenswrapper[4708]: E0227 18:20:12.243805 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:20:17 crc kubenswrapper[4708]: I0227 18:20:17.229023 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:20:17 crc kubenswrapper[4708]: E0227 18:20:17.230068 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:20:18 crc kubenswrapper[4708]: E0227 18:20:18.232328 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" 
podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:20:22 crc kubenswrapper[4708]: E0227 18:20:22.245358 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:20:24 crc kubenswrapper[4708]: E0227 18:20:24.231809 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:20:24 crc kubenswrapper[4708]: E0227 18:20:24.231964 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:20:27 crc kubenswrapper[4708]: E0227 18:20:27.230716 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:20:28 crc kubenswrapper[4708]: I0227 18:20:28.229732 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:20:28 crc kubenswrapper[4708]: E0227 18:20:28.230610 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:20:32 crc kubenswrapper[4708]: E0227 18:20:32.242314 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:20:34 crc kubenswrapper[4708]: E0227 18:20:34.231388 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:20:37 crc kubenswrapper[4708]: E0227 18:20:37.231892 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:20:39 crc kubenswrapper[4708]: I0227 18:20:39.229400 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:20:39 crc kubenswrapper[4708]: E0227 
18:20:39.230462 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:20:39 crc kubenswrapper[4708]: E0227 18:20:39.231100 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:20:39 crc kubenswrapper[4708]: E0227 18:20:39.231712 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:20:47 crc kubenswrapper[4708]: E0227 18:20:47.231705 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:20:48 crc kubenswrapper[4708]: E0227 18:20:48.229978 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:20:51 crc kubenswrapper[4708]: I0227 18:20:51.229075 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:20:51 crc kubenswrapper[4708]: E0227 18:20:51.230022 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:20:52 crc kubenswrapper[4708]: E0227 18:20:52.241479 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:20:53 crc kubenswrapper[4708]: E0227 18:20:53.230743 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:20:54 crc kubenswrapper[4708]: E0227 18:20:54.230138 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:21:00 crc kubenswrapper[4708]: E0227 18:21:00.233213 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:21:02 crc kubenswrapper[4708]: E0227 18:21:02.244437 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:21:05 crc kubenswrapper[4708]: I0227 18:21:05.066494 4708 scope.go:117] "RemoveContainer" containerID="73a73e92dc52984b37d2e83e1a23772a3224ac59bca23ec290e0fd574f9c5c98" Feb 27 18:21:05 crc kubenswrapper[4708]: E0227 18:21:05.229814 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:21:05 crc kubenswrapper[4708]: E0227 18:21:05.230620 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:21:06 crc kubenswrapper[4708]: I0227 18:21:06.229370 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:21:06 crc kubenswrapper[4708]: E0227 18:21:06.230062 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:21:08 crc kubenswrapper[4708]: E0227 18:21:08.232760 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:21:13 crc kubenswrapper[4708]: E0227 18:21:13.232019 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:21:16 crc kubenswrapper[4708]: E0227 18:21:16.231067 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:21:20 crc kubenswrapper[4708]: E0227 18:21:20.230390 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:21:21 crc kubenswrapper[4708]: I0227 18:21:21.229903 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:21:21 crc kubenswrapper[4708]: E0227 18:21:21.230349 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:21:22 crc kubenswrapper[4708]: E0227 18:21:22.195083 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Feb 27 18:21:22 crc kubenswrapper[4708]: E0227 18:21:22.195587 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6dm9g,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-v5746_openshift-marketplace(fabec868-3d01-4bf9-b042-15f99cb49544): ErrImagePull: copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:21:22 crc kubenswrapper[4708]: E0227 18:21:22.196887 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-marketplace-index@sha256=e848a00af7690cfa41500b98e0e7a0b9738ce0af7b6b4fee3ea20e0838523c30/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:21:24 crc kubenswrapper[4708]: E0227 18:21:24.231023 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:21:29 crc kubenswrapper[4708]: E0227 18:21:29.230681 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:21:33 crc kubenswrapper[4708]: E0227 18:21:33.234256 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:21:34 crc kubenswrapper[4708]: I0227 18:21:34.230520 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:21:34 crc kubenswrapper[4708]: E0227 18:21:34.231291 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:21:35 crc kubenswrapper[4708]: E0227 18:21:35.232374 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:21:35 crc kubenswrapper[4708]: E0227 18:21:35.232428 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:21:44 crc kubenswrapper[4708]: E0227 18:21:44.231685 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with 
ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:21:46 crc kubenswrapper[4708]: E0227 18:21:46.231466 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:21:47 crc kubenswrapper[4708]: E0227 18:21:47.230608 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:21:49 crc kubenswrapper[4708]: I0227 18:21:49.228422 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:21:49 crc kubenswrapper[4708]: E0227 18:21:49.228959 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:21:50 crc kubenswrapper[4708]: E0227 18:21:50.230512 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:21:55 crc kubenswrapper[4708]: E0227 18:21:55.231821 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:21:59 crc kubenswrapper[4708]: E0227 18:21:59.232768 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:21:59 crc kubenswrapper[4708]: E0227 18:21:59.521968 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing image from source docker://registry.redhat.io/openshift4/ose-cli:latest: unexpected end of JSON input" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:21:59 crc kubenswrapper[4708]: E0227 18:21:59.522163 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:21:59 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:21:59 crc kubenswrapper[4708]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-l59vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536926-27vc5_openshift-infra(4169fe13-35f1-4450-b318-9b29670cdf2d): ErrImagePull: initializing image from source docker://registry.redhat.io/openshift4/ose-cli:latest: unexpected end of JSON input Feb 27 18:21:59 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:21:59 crc kubenswrapper[4708]: E0227 18:21:59.523430 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"initializing image from source docker://registry.redhat.io/openshift4/ose-cli:latest: unexpected end of JSON input\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:22:00 crc kubenswrapper[4708]: I0227 18:22:00.162406 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536942-bbblb"] Feb 27 18:22:00 crc kubenswrapper[4708]: E0227 18:22:00.163101 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c46d8a-05d5-44c8-86b3-a571832c34aa" containerName="oc" Feb 27 18:22:00 crc kubenswrapper[4708]: I0227 18:22:00.163128 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c46d8a-05d5-44c8-86b3-a571832c34aa" containerName="oc" Feb 27 18:22:00 crc kubenswrapper[4708]: I0227 18:22:00.163461 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c46d8a-05d5-44c8-86b3-a571832c34aa" containerName="oc" Feb 27 18:22:00 crc kubenswrapper[4708]: I0227 18:22:00.164668 4708 util.go:30] "No sandbox for pod can be found. 
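[Annotation] The recurring 500s all come from the registry's sigstore endpoint, and the failing URLs follow one pattern: https://registry.redhat.io/containers/sigstore/<repository>@sha256=<digest>/signature-<n>. The occasional "unexpected end of JSON input" failure, as at 18:21:59, looks like the same registry instability surfacing as a truncated response rather than an explicit 500. Since CRI-O is just doing an HTTPS GET for the signature, the endpoint can be probed directly while deciding whether to wait out the outage; a hypothetical stdlib probe (URL copied from the pull errors above; this assumes the sigstore path is anonymously readable):

    import urllib.error
    import urllib.request

    # Signature URL copied verbatim from the pull errors in this journal.
    SIG_URL = ("https://registry.redhat.io/containers/sigstore/openshift4/"
               "ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/"
               "signature-7")

    def probe(url):
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                return resp.status  # 200 means the signature is servable again
        except urllib.error.HTTPError as err:
            return err.code  # 500 here reproduces what CRI-O keeps hitting

    print(probe(SIG_URL))

A sustained 500 from this probe confirms the problem is server-side at the registry, not anything in the node's pull configuration.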
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536942-bbblb" Feb 27 18:22:00 crc kubenswrapper[4708]: I0227 18:22:00.181576 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536942-bbblb"] Feb 27 18:22:00 crc kubenswrapper[4708]: I0227 18:22:00.291236 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgpww\" (UniqueName: \"kubernetes.io/projected/6938980b-8ef4-4ded-9afe-3e2adbc609ec-kube-api-access-cgpww\") pod \"auto-csr-approver-29536942-bbblb\" (UID: \"6938980b-8ef4-4ded-9afe-3e2adbc609ec\") " pod="openshift-infra/auto-csr-approver-29536942-bbblb" Feb 27 18:22:00 crc kubenswrapper[4708]: I0227 18:22:00.394010 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgpww\" (UniqueName: \"kubernetes.io/projected/6938980b-8ef4-4ded-9afe-3e2adbc609ec-kube-api-access-cgpww\") pod \"auto-csr-approver-29536942-bbblb\" (UID: \"6938980b-8ef4-4ded-9afe-3e2adbc609ec\") " pod="openshift-infra/auto-csr-approver-29536942-bbblb" Feb 27 18:22:00 crc kubenswrapper[4708]: I0227 18:22:00.429346 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cgpww\" (UniqueName: \"kubernetes.io/projected/6938980b-8ef4-4ded-9afe-3e2adbc609ec-kube-api-access-cgpww\") pod \"auto-csr-approver-29536942-bbblb\" (UID: \"6938980b-8ef4-4ded-9afe-3e2adbc609ec\") " pod="openshift-infra/auto-csr-approver-29536942-bbblb" Feb 27 18:22:00 crc kubenswrapper[4708]: I0227 18:22:00.494513 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536942-bbblb" Feb 27 18:22:01 crc kubenswrapper[4708]: W0227 18:22:01.033674 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6938980b_8ef4_4ded_9afe_3e2adbc609ec.slice/crio-92292660e3d02a0c84d1bd76b416c27080658af37b080339e56e6e179d519039 WatchSource:0}: Error finding container 92292660e3d02a0c84d1bd76b416c27080658af37b080339e56e6e179d519039: Status 404 returned error can't find the container with id 92292660e3d02a0c84d1bd76b416c27080658af37b080339e56e6e179d519039 Feb 27 18:22:01 crc kubenswrapper[4708]: I0227 18:22:01.041221 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536942-bbblb"] Feb 27 18:22:01 crc kubenswrapper[4708]: I0227 18:22:01.228904 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:22:01 crc kubenswrapper[4708]: E0227 18:22:01.229223 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:22:01 crc kubenswrapper[4708]: I0227 18:22:01.616133 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536942-bbblb" event={"ID":"6938980b-8ef4-4ded-9afe-3e2adbc609ec","Type":"ContainerStarted","Data":"92292660e3d02a0c84d1bd76b416c27080658af37b080339e56e6e179d519039"} Feb 27 18:22:02 crc kubenswrapper[4708]: I0227 18:22:02.627203 4708 generic.go:334] "Generic (PLEG): container finished" 
podID="6938980b-8ef4-4ded-9afe-3e2adbc609ec" containerID="11dfb7da0010c92d6ee38200e71fe986232df57e849d6b014b3dbeefd49d6725" exitCode=0 Feb 27 18:22:02 crc kubenswrapper[4708]: I0227 18:22:02.627391 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536942-bbblb" event={"ID":"6938980b-8ef4-4ded-9afe-3e2adbc609ec","Type":"ContainerDied","Data":"11dfb7da0010c92d6ee38200e71fe986232df57e849d6b014b3dbeefd49d6725"} Feb 27 18:22:03 crc kubenswrapper[4708]: E0227 18:22:03.231450 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:22:04 crc kubenswrapper[4708]: I0227 18:22:04.079308 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536942-bbblb" Feb 27 18:22:04 crc kubenswrapper[4708]: I0227 18:22:04.191235 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgpww\" (UniqueName: \"kubernetes.io/projected/6938980b-8ef4-4ded-9afe-3e2adbc609ec-kube-api-access-cgpww\") pod \"6938980b-8ef4-4ded-9afe-3e2adbc609ec\" (UID: \"6938980b-8ef4-4ded-9afe-3e2adbc609ec\") " Feb 27 18:22:04 crc kubenswrapper[4708]: I0227 18:22:04.205099 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6938980b-8ef4-4ded-9afe-3e2adbc609ec-kube-api-access-cgpww" (OuterVolumeSpecName: "kube-api-access-cgpww") pod "6938980b-8ef4-4ded-9afe-3e2adbc609ec" (UID: "6938980b-8ef4-4ded-9afe-3e2adbc609ec"). InnerVolumeSpecName "kube-api-access-cgpww". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:22:04 crc kubenswrapper[4708]: I0227 18:22:04.294257 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgpww\" (UniqueName: \"kubernetes.io/projected/6938980b-8ef4-4ded-9afe-3e2adbc609ec-kube-api-access-cgpww\") on node \"crc\" DevicePath \"\"" Feb 27 18:22:04 crc kubenswrapper[4708]: E0227 18:22:04.486631 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6938980b_8ef4_4ded_9afe_3e2adbc609ec.slice\": RecentStats: unable to find data in memory cache]" Feb 27 18:22:04 crc kubenswrapper[4708]: I0227 18:22:04.656560 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536942-bbblb" event={"ID":"6938980b-8ef4-4ded-9afe-3e2adbc609ec","Type":"ContainerDied","Data":"92292660e3d02a0c84d1bd76b416c27080658af37b080339e56e6e179d519039"} Feb 27 18:22:04 crc kubenswrapper[4708]: I0227 18:22:04.656608 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="92292660e3d02a0c84d1bd76b416c27080658af37b080339e56e6e179d519039" Feb 27 18:22:04 crc kubenswrapper[4708]: I0227 18:22:04.656685 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536942-bbblb" Feb 27 18:22:05 crc kubenswrapper[4708]: I0227 18:22:05.179147 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536936-jdswd"] Feb 27 18:22:05 crc kubenswrapper[4708]: I0227 18:22:05.196906 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536936-jdswd"] Feb 27 18:22:06 crc kubenswrapper[4708]: I0227 18:22:06.249247 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efc70beb-3139-4d44-b928-698fe1e86ac6" path="/var/lib/kubelet/pods/efc70beb-3139-4d44-b928-698fe1e86ac6/volumes" Feb 27 18:22:07 crc kubenswrapper[4708]: E0227 18:22:07.232885 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:22:10 crc kubenswrapper[4708]: E0227 18:22:10.232748 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:22:12 crc kubenswrapper[4708]: I0227 18:22:12.242456 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:22:12 crc kubenswrapper[4708]: E0227 18:22:12.243375 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:22:14 crc kubenswrapper[4708]: E0227 18:22:14.232687 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:22:18 crc kubenswrapper[4708]: E0227 18:22:18.233202 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:22:19 crc kubenswrapper[4708]: E0227 18:22:19.026813 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:22:19 crc kubenswrapper[4708]: E0227 18:22:19.027067 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:22:19 crc kubenswrapper[4708]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:22:19 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-6dm64,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536930-d9sgn_openshift-infra(fb343271-5527-4655-973b-f3a35b328fce): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:22:19 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:22:19 crc kubenswrapper[4708]: E0227 18:22:19.028715 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:22:22 crc kubenswrapper[4708]: E0227 18:22:22.245925 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:22:22 crc kubenswrapper[4708]: E0227 18:22:22.246470 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:22:25 crc kubenswrapper[4708]: E0227 18:22:25.232477 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:22:27 crc kubenswrapper[4708]: I0227 18:22:27.228444 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:22:27 crc kubenswrapper[4708]: E0227 18:22:27.229306 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:22:32 crc kubenswrapper[4708]: E0227 18:22:32.245726 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:22:33 crc kubenswrapper[4708]: E0227 18:22:33.231026 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:22:34 crc kubenswrapper[4708]: E0227 18:22:34.234735 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:22:37 crc kubenswrapper[4708]: E0227 18:22:37.231616 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:22:37 crc kubenswrapper[4708]: E0227 18:22:37.231636 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:22:41 crc kubenswrapper[4708]: I0227 18:22:41.229181 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:22:41 crc kubenswrapper[4708]: E0227 18:22:41.231365 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:22:46 crc kubenswrapper[4708]: E0227 18:22:46.232876 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:22:47 crc kubenswrapper[4708]: E0227 18:22:47.231936 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" 
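[Annotation] By this point the section is the same handful of pods cycling, so a quick tally is more readable than the raw stream, e.g. counting "Error syncing pod, skipping" entries per pod to rank the noise (hypothetical helper; the pod="..." field format matches the lines above):

    import re
    from collections import Counter

    POD_RE = re.compile(r'"Error syncing pod, skipping".*?pod="([^"]+)"')

    def noisiest_pods(journal_lines, top=5):
        counts = Counter()
        for line in journal_lines:
            match = POD_RE.search(line)
            if match:
                counts[match.group(1)] += 1
        return counts.most_common(top)

    # e.g. noisiest_pods(open("kubelet.journal")) ->
    #   [("openshift-infra/auto-csr-approver-29536926-27vc5", N), ...]

Run over this section, the ranking is dominated by the four image-pull-blocked auto-csr-approver pods, the redhat-marketplace-v5746 catalog pod, and the crash-looping machine-config-daemon, all of which trace back to the two root causes already visible above: the registry's failing sigstore endpoint and the daemon's own restart back-off.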
podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:22:48 crc kubenswrapper[4708]: E0227 18:22:48.229690 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:22:51 crc kubenswrapper[4708]: E0227 18:22:51.233475 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:22:51 crc kubenswrapper[4708]: E0227 18:22:51.233549 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:22:56 crc kubenswrapper[4708]: I0227 18:22:56.229051 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:22:56 crc kubenswrapper[4708]: E0227 18:22:56.230297 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:23:00 crc kubenswrapper[4708]: E0227 18:23:00.232731 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:23:00 crc kubenswrapper[4708]: E0227 18:23:00.233009 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:23:02 crc kubenswrapper[4708]: E0227 18:23:02.246429 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:23:03 crc kubenswrapper[4708]: E0227 18:23:03.229807 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:23:05 crc kubenswrapper[4708]: I0227 18:23:05.204015 4708 scope.go:117] "RemoveContainer" containerID="79bbc71460a345d972153185428b7652ce261317efb27392cad66ddb09149863" Feb 27 18:23:05 crc kubenswrapper[4708]: E0227 
18:23:05.251683 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:23:07 crc kubenswrapper[4708]: I0227 18:23:07.229491 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:23:07 crc kubenswrapper[4708]: E0227 18:23:07.229751 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:23:11 crc kubenswrapper[4708]: E0227 18:23:11.232595 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:23:13 crc kubenswrapper[4708]: E0227 18:23:13.231163 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:23:13 crc kubenswrapper[4708]: E0227 18:23:13.231196 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:23:14 crc kubenswrapper[4708]: E0227 18:23:14.230546 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:23:19 crc kubenswrapper[4708]: E0227 18:23:19.231054 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:23:22 crc kubenswrapper[4708]: I0227 18:23:22.244701 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:23:22 crc kubenswrapper[4708]: E0227 18:23:22.245884 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:23:26 crc 
kubenswrapper[4708]: E0227 18:23:26.230762 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:23:27 crc kubenswrapper[4708]: E0227 18:23:27.229741 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:23:27 crc kubenswrapper[4708]: E0227 18:23:27.230176 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:23:28 crc kubenswrapper[4708]: E0227 18:23:28.230574 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:23:33 crc kubenswrapper[4708]: E0227 18:23:33.232874 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:23:34 crc kubenswrapper[4708]: I0227 18:23:34.228798 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:23:34 crc kubenswrapper[4708]: E0227 18:23:34.229503 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:23:38 crc kubenswrapper[4708]: E0227 18:23:38.232998 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:23:39 crc kubenswrapper[4708]: E0227 18:23:39.231075 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:23:39 crc kubenswrapper[4708]: E0227 18:23:39.231563 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:23:40 crc kubenswrapper[4708]: E0227 18:23:40.231942 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:23:46 crc kubenswrapper[4708]: E0227 18:23:46.232930 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:23:48 crc kubenswrapper[4708]: I0227 18:23:48.229339 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:23:48 crc kubenswrapper[4708]: E0227 18:23:48.230297 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:23:51 crc kubenswrapper[4708]: E0227 18:23:51.231822 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:23:51 crc kubenswrapper[4708]: E0227 18:23:51.231823 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" Feb 27 18:23:53 crc kubenswrapper[4708]: E0227 18:23:53.232196 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:23:55 crc kubenswrapper[4708]: E0227 18:23:55.230830 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:24:00 crc kubenswrapper[4708]: I0227 18:24:00.183259 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536944-mc9xt"] Feb 27 18:24:00 crc kubenswrapper[4708]: E0227 18:24:00.184912 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6938980b-8ef4-4ded-9afe-3e2adbc609ec" containerName="oc" Feb 27 18:24:00 crc kubenswrapper[4708]: I0227 18:24:00.184935 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="6938980b-8ef4-4ded-9afe-3e2adbc609ec" containerName="oc" 
Feb 27 18:24:00 crc kubenswrapper[4708]: I0227 18:24:00.185371 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="6938980b-8ef4-4ded-9afe-3e2adbc609ec" containerName="oc" Feb 27 18:24:00 crc kubenswrapper[4708]: I0227 18:24:00.186725 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536944-mc9xt" Feb 27 18:24:00 crc kubenswrapper[4708]: I0227 18:24:00.196448 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536944-mc9xt"] Feb 27 18:24:00 crc kubenswrapper[4708]: E0227 18:24:00.231143 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:24:00 crc kubenswrapper[4708]: I0227 18:24:00.325839 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9v7bn\" (UniqueName: \"kubernetes.io/projected/8a060715-2648-4f1c-ab55-1633203a02c2-kube-api-access-9v7bn\") pod \"auto-csr-approver-29536944-mc9xt\" (UID: \"8a060715-2648-4f1c-ab55-1633203a02c2\") " pod="openshift-infra/auto-csr-approver-29536944-mc9xt" Feb 27 18:24:00 crc kubenswrapper[4708]: I0227 18:24:00.428736 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9v7bn\" (UniqueName: \"kubernetes.io/projected/8a060715-2648-4f1c-ab55-1633203a02c2-kube-api-access-9v7bn\") pod \"auto-csr-approver-29536944-mc9xt\" (UID: \"8a060715-2648-4f1c-ab55-1633203a02c2\") " pod="openshift-infra/auto-csr-approver-29536944-mc9xt" Feb 27 18:24:00 crc kubenswrapper[4708]: I0227 18:24:00.457296 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9v7bn\" (UniqueName: \"kubernetes.io/projected/8a060715-2648-4f1c-ab55-1633203a02c2-kube-api-access-9v7bn\") pod \"auto-csr-approver-29536944-mc9xt\" (UID: \"8a060715-2648-4f1c-ab55-1633203a02c2\") " pod="openshift-infra/auto-csr-approver-29536944-mc9xt" Feb 27 18:24:00 crc kubenswrapper[4708]: I0227 18:24:00.512707 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536944-mc9xt" Feb 27 18:24:01 crc kubenswrapper[4708]: W0227 18:24:01.064473 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8a060715_2648_4f1c_ab55_1633203a02c2.slice/crio-a7cebaf0970e52275e5ca7420b2a5499fca23ac9c87c7c4c37cdaba028a4974c WatchSource:0}: Error finding container a7cebaf0970e52275e5ca7420b2a5499fca23ac9c87c7c4c37cdaba028a4974c: Status 404 returned error can't find the container with id a7cebaf0970e52275e5ca7420b2a5499fca23ac9c87c7c4c37cdaba028a4974c Feb 27 18:24:01 crc kubenswrapper[4708]: I0227 18:24:01.065255 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536944-mc9xt"] Feb 27 18:24:01 crc kubenswrapper[4708]: I0227 18:24:01.067298 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 18:24:01 crc kubenswrapper[4708]: I0227 18:24:01.155422 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536944-mc9xt" event={"ID":"8a060715-2648-4f1c-ab55-1633203a02c2","Type":"ContainerStarted","Data":"a7cebaf0970e52275e5ca7420b2a5499fca23ac9c87c7c4c37cdaba028a4974c"} Feb 27 18:24:03 crc kubenswrapper[4708]: I0227 18:24:03.184280 4708 generic.go:334] "Generic (PLEG): container finished" podID="8a060715-2648-4f1c-ab55-1633203a02c2" containerID="5db0772007d50353bc7c4bf4e1949764322c23eade2a997d7df25912d81b26b3" exitCode=0 Feb 27 18:24:03 crc kubenswrapper[4708]: I0227 18:24:03.184396 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536944-mc9xt" event={"ID":"8a060715-2648-4f1c-ab55-1633203a02c2","Type":"ContainerDied","Data":"5db0772007d50353bc7c4bf4e1949764322c23eade2a997d7df25912d81b26b3"} Feb 27 18:24:03 crc kubenswrapper[4708]: I0227 18:24:03.230514 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:24:03 crc kubenswrapper[4708]: E0227 18:24:03.230918 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:24:04 crc kubenswrapper[4708]: I0227 18:24:04.637584 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536944-mc9xt" Feb 27 18:24:04 crc kubenswrapper[4708]: I0227 18:24:04.738170 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9v7bn\" (UniqueName: \"kubernetes.io/projected/8a060715-2648-4f1c-ab55-1633203a02c2-kube-api-access-9v7bn\") pod \"8a060715-2648-4f1c-ab55-1633203a02c2\" (UID: \"8a060715-2648-4f1c-ab55-1633203a02c2\") " Feb 27 18:24:04 crc kubenswrapper[4708]: I0227 18:24:04.747280 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a060715-2648-4f1c-ab55-1633203a02c2-kube-api-access-9v7bn" (OuterVolumeSpecName: "kube-api-access-9v7bn") pod "8a060715-2648-4f1c-ab55-1633203a02c2" (UID: "8a060715-2648-4f1c-ab55-1633203a02c2"). InnerVolumeSpecName "kube-api-access-9v7bn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:24:04 crc kubenswrapper[4708]: I0227 18:24:04.841456 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9v7bn\" (UniqueName: \"kubernetes.io/projected/8a060715-2648-4f1c-ab55-1633203a02c2-kube-api-access-9v7bn\") on node \"crc\" DevicePath \"\"" Feb 27 18:24:05 crc kubenswrapper[4708]: I0227 18:24:05.204713 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536944-mc9xt" event={"ID":"8a060715-2648-4f1c-ab55-1633203a02c2","Type":"ContainerDied","Data":"a7cebaf0970e52275e5ca7420b2a5499fca23ac9c87c7c4c37cdaba028a4974c"} Feb 27 18:24:05 crc kubenswrapper[4708]: I0227 18:24:05.204752 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7cebaf0970e52275e5ca7420b2a5499fca23ac9c87c7c4c37cdaba028a4974c" Feb 27 18:24:05 crc kubenswrapper[4708]: I0227 18:24:05.204777 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536944-mc9xt" Feb 27 18:24:05 crc kubenswrapper[4708]: E0227 18:24:05.230179 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:24:05 crc kubenswrapper[4708]: I0227 18:24:05.720794 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536938-rpzzd"] Feb 27 18:24:05 crc kubenswrapper[4708]: I0227 18:24:05.730681 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536938-rpzzd"] Feb 27 18:24:06 crc kubenswrapper[4708]: I0227 18:24:06.238463 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c13e5f6d-7286-4ea5-bad3-84d30d472475" path="/var/lib/kubelet/pods/c13e5f6d-7286-4ea5-bad3-84d30d472475/volumes" Feb 27 18:24:07 crc kubenswrapper[4708]: I0227 18:24:07.232483 4708 generic.go:334] "Generic (PLEG): container finished" podID="fabec868-3d01-4bf9-b042-15f99cb49544" containerID="0468d804d67a6b3bf7992715beaf066e860399f3b21c5eef1e624089af09e1e3" exitCode=0 Feb 27 18:24:07 crc kubenswrapper[4708]: I0227 18:24:07.232570 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5746" event={"ID":"fabec868-3d01-4bf9-b042-15f99cb49544","Type":"ContainerDied","Data":"0468d804d67a6b3bf7992715beaf066e860399f3b21c5eef1e624089af09e1e3"} Feb 27 18:24:08 crc kubenswrapper[4708]: E0227 18:24:08.230399 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:24:08 crc kubenswrapper[4708]: E0227 18:24:08.230428 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:24:08 crc kubenswrapper[4708]: I0227 18:24:08.250558 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5746" 
event={"ID":"fabec868-3d01-4bf9-b042-15f99cb49544","Type":"ContainerStarted","Data":"3333dac57694487cfd3674230d171bcb02aca62ed3d7924022dabc67a7d99bf7"} Feb 27 18:24:08 crc kubenswrapper[4708]: I0227 18:24:08.273590 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v5746" podStartSLOduration=3.111032467 podStartE2EDuration="5m53.27356903s" podCreationTimestamp="2026-02-27 18:18:15 +0000 UTC" firstStartedPulling="2026-02-27 18:18:17.625092488 +0000 UTC m=+5096.140890115" lastFinishedPulling="2026-02-27 18:24:07.787629051 +0000 UTC m=+5446.303426678" observedRunningTime="2026-02-27 18:24:08.272387877 +0000 UTC m=+5446.788185504" watchObservedRunningTime="2026-02-27 18:24:08.27356903 +0000 UTC m=+5446.789366627" Feb 27 18:24:14 crc kubenswrapper[4708]: E0227 18:24:14.232498 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:24:15 crc kubenswrapper[4708]: I0227 18:24:15.922297 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-v5746" Feb 27 18:24:15 crc kubenswrapper[4708]: I0227 18:24:15.922764 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v5746" Feb 27 18:24:16 crc kubenswrapper[4708]: I0227 18:24:16.590003 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v5746" Feb 27 18:24:16 crc kubenswrapper[4708]: I0227 18:24:16.675099 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v5746" Feb 27 18:24:16 crc kubenswrapper[4708]: I0227 18:24:16.834715 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5746"] Feb 27 18:24:17 crc kubenswrapper[4708]: I0227 18:24:17.230930 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:24:17 crc kubenswrapper[4708]: E0227 18:24:17.231531 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:24:17 crc kubenswrapper[4708]: E0227 18:24:17.233123 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:24:18 crc kubenswrapper[4708]: I0227 18:24:18.356969 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v5746" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" containerName="registry-server" containerID="cri-o://3333dac57694487cfd3674230d171bcb02aca62ed3d7924022dabc67a7d99bf7" gracePeriod=2 Feb 27 18:24:18 crc kubenswrapper[4708]: I0227 
18:24:18.911993 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v5746" Feb 27 18:24:18 crc kubenswrapper[4708]: I0227 18:24:18.965784 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabec868-3d01-4bf9-b042-15f99cb49544-catalog-content\") pod \"fabec868-3d01-4bf9-b042-15f99cb49544\" (UID: \"fabec868-3d01-4bf9-b042-15f99cb49544\") " Feb 27 18:24:18 crc kubenswrapper[4708]: I0227 18:24:18.966047 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dm9g\" (UniqueName: \"kubernetes.io/projected/fabec868-3d01-4bf9-b042-15f99cb49544-kube-api-access-6dm9g\") pod \"fabec868-3d01-4bf9-b042-15f99cb49544\" (UID: \"fabec868-3d01-4bf9-b042-15f99cb49544\") " Feb 27 18:24:18 crc kubenswrapper[4708]: I0227 18:24:18.966174 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabec868-3d01-4bf9-b042-15f99cb49544-utilities\") pod \"fabec868-3d01-4bf9-b042-15f99cb49544\" (UID: \"fabec868-3d01-4bf9-b042-15f99cb49544\") " Feb 27 18:24:18 crc kubenswrapper[4708]: I0227 18:24:18.967115 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fabec868-3d01-4bf9-b042-15f99cb49544-utilities" (OuterVolumeSpecName: "utilities") pod "fabec868-3d01-4bf9-b042-15f99cb49544" (UID: "fabec868-3d01-4bf9-b042-15f99cb49544"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:24:18 crc kubenswrapper[4708]: I0227 18:24:18.974758 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fabec868-3d01-4bf9-b042-15f99cb49544-kube-api-access-6dm9g" (OuterVolumeSpecName: "kube-api-access-6dm9g") pod "fabec868-3d01-4bf9-b042-15f99cb49544" (UID: "fabec868-3d01-4bf9-b042-15f99cb49544"). InnerVolumeSpecName "kube-api-access-6dm9g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:24:18 crc kubenswrapper[4708]: I0227 18:24:18.993794 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fabec868-3d01-4bf9-b042-15f99cb49544-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fabec868-3d01-4bf9-b042-15f99cb49544" (UID: "fabec868-3d01-4bf9-b042-15f99cb49544"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.069684 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dm9g\" (UniqueName: \"kubernetes.io/projected/fabec868-3d01-4bf9-b042-15f99cb49544-kube-api-access-6dm9g\") on node \"crc\" DevicePath \"\"" Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.069719 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fabec868-3d01-4bf9-b042-15f99cb49544-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.069728 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fabec868-3d01-4bf9-b042-15f99cb49544-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.369274 4708 generic.go:334] "Generic (PLEG): container finished" podID="fabec868-3d01-4bf9-b042-15f99cb49544" containerID="3333dac57694487cfd3674230d171bcb02aca62ed3d7924022dabc67a7d99bf7" exitCode=0 Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.369329 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5746" event={"ID":"fabec868-3d01-4bf9-b042-15f99cb49544","Type":"ContainerDied","Data":"3333dac57694487cfd3674230d171bcb02aca62ed3d7924022dabc67a7d99bf7"} Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.369334 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v5746" Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.369362 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v5746" event={"ID":"fabec868-3d01-4bf9-b042-15f99cb49544","Type":"ContainerDied","Data":"d0417dd598be3788186a865f7992f4917c30b8636bf0c63b29833443d73c99e7"} Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.369409 4708 scope.go:117] "RemoveContainer" containerID="3333dac57694487cfd3674230d171bcb02aca62ed3d7924022dabc67a7d99bf7" Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.410969 4708 scope.go:117] "RemoveContainer" containerID="0468d804d67a6b3bf7992715beaf066e860399f3b21c5eef1e624089af09e1e3" Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.420049 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5746"] Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.435340 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v5746"] Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.435483 4708 scope.go:117] "RemoveContainer" containerID="1f148645640ea164e894d3f99e2b84a722c774f3305021c4f6530a49ac31d945" Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.501486 4708 scope.go:117] "RemoveContainer" containerID="3333dac57694487cfd3674230d171bcb02aca62ed3d7924022dabc67a7d99bf7" Feb 27 18:24:19 crc kubenswrapper[4708]: E0227 18:24:19.502538 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3333dac57694487cfd3674230d171bcb02aca62ed3d7924022dabc67a7d99bf7\": container with ID starting with 3333dac57694487cfd3674230d171bcb02aca62ed3d7924022dabc67a7d99bf7 not found: ID does not exist" containerID="3333dac57694487cfd3674230d171bcb02aca62ed3d7924022dabc67a7d99bf7" Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.502635 4708 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3333dac57694487cfd3674230d171bcb02aca62ed3d7924022dabc67a7d99bf7"} err="failed to get container status \"3333dac57694487cfd3674230d171bcb02aca62ed3d7924022dabc67a7d99bf7\": rpc error: code = NotFound desc = could not find container \"3333dac57694487cfd3674230d171bcb02aca62ed3d7924022dabc67a7d99bf7\": container with ID starting with 3333dac57694487cfd3674230d171bcb02aca62ed3d7924022dabc67a7d99bf7 not found: ID does not exist" Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.502701 4708 scope.go:117] "RemoveContainer" containerID="0468d804d67a6b3bf7992715beaf066e860399f3b21c5eef1e624089af09e1e3" Feb 27 18:24:19 crc kubenswrapper[4708]: E0227 18:24:19.503957 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0468d804d67a6b3bf7992715beaf066e860399f3b21c5eef1e624089af09e1e3\": container with ID starting with 0468d804d67a6b3bf7992715beaf066e860399f3b21c5eef1e624089af09e1e3 not found: ID does not exist" containerID="0468d804d67a6b3bf7992715beaf066e860399f3b21c5eef1e624089af09e1e3" Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.504016 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0468d804d67a6b3bf7992715beaf066e860399f3b21c5eef1e624089af09e1e3"} err="failed to get container status \"0468d804d67a6b3bf7992715beaf066e860399f3b21c5eef1e624089af09e1e3\": rpc error: code = NotFound desc = could not find container \"0468d804d67a6b3bf7992715beaf066e860399f3b21c5eef1e624089af09e1e3\": container with ID starting with 0468d804d67a6b3bf7992715beaf066e860399f3b21c5eef1e624089af09e1e3 not found: ID does not exist" Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.504059 4708 scope.go:117] "RemoveContainer" containerID="1f148645640ea164e894d3f99e2b84a722c774f3305021c4f6530a49ac31d945" Feb 27 18:24:19 crc kubenswrapper[4708]: E0227 18:24:19.504423 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1f148645640ea164e894d3f99e2b84a722c774f3305021c4f6530a49ac31d945\": container with ID starting with 1f148645640ea164e894d3f99e2b84a722c774f3305021c4f6530a49ac31d945 not found: ID does not exist" containerID="1f148645640ea164e894d3f99e2b84a722c774f3305021c4f6530a49ac31d945" Feb 27 18:24:19 crc kubenswrapper[4708]: I0227 18:24:19.504495 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1f148645640ea164e894d3f99e2b84a722c774f3305021c4f6530a49ac31d945"} err="failed to get container status \"1f148645640ea164e894d3f99e2b84a722c774f3305021c4f6530a49ac31d945\": rpc error: code = NotFound desc = could not find container \"1f148645640ea164e894d3f99e2b84a722c774f3305021c4f6530a49ac31d945\": container with ID starting with 1f148645640ea164e894d3f99e2b84a722c774f3305021c4f6530a49ac31d945 not found: ID does not exist" Feb 27 18:24:20 crc kubenswrapper[4708]: E0227 18:24:20.230724 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:24:20 crc kubenswrapper[4708]: I0227 18:24:20.242456 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" 
path="/var/lib/kubelet/pods/fabec868-3d01-4bf9-b042-15f99cb49544/volumes" Feb 27 18:24:21 crc kubenswrapper[4708]: E0227 18:24:21.230589 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:24:26 crc kubenswrapper[4708]: E0227 18:24:26.244199 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:24:26 crc kubenswrapper[4708]: E0227 18:24:26.245299 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:24:26 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:24:26 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tb8pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536928-k2dpc_openshift-infra(7be693cf-322d-4ac9-b66c-35a281510ef4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:24:26 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:24:26 crc kubenswrapper[4708]: E0227 18:24:26.247256 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:24:28 crc kubenswrapper[4708]: I0227 18:24:28.230512 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:24:28 crc kubenswrapper[4708]: E0227 18:24:28.232166 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:24:28 crc kubenswrapper[4708]: E0227 18:24:28.233306 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:24:33 crc kubenswrapper[4708]: E0227 18:24:33.230537 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:24:36 crc kubenswrapper[4708]: E0227 18:24:36.232585 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:24:38 crc kubenswrapper[4708]: E0227 18:24:38.231096 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.399771 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-96r6s"] Feb 27 18:24:38 crc kubenswrapper[4708]: E0227 18:24:38.400380 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" containerName="registry-server" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.400447 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" containerName="registry-server" Feb 27 18:24:38 crc kubenswrapper[4708]: E0227 18:24:38.400528 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a060715-2648-4f1c-ab55-1633203a02c2" containerName="oc" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.400579 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a060715-2648-4f1c-ab55-1633203a02c2" containerName="oc" Feb 27 18:24:38 crc kubenswrapper[4708]: E0227 18:24:38.400660 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" containerName="extract-content" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.400713 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" containerName="extract-content" Feb 27 18:24:38 crc kubenswrapper[4708]: E0227 18:24:38.400768 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" containerName="extract-utilities" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.400816 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" containerName="extract-utilities" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.401111 4708 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="fabec868-3d01-4bf9-b042-15f99cb49544" containerName="registry-server" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.401191 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a060715-2648-4f1c-ab55-1633203a02c2" containerName="oc" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.402714 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.414445 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-96r6s"] Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.476114 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63551e59-36d8-4ed5-a0c3-f425d10a51bf-catalog-content\") pod \"redhat-operators-96r6s\" (UID: \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\") " pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.476228 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63551e59-36d8-4ed5-a0c3-f425d10a51bf-utilities\") pod \"redhat-operators-96r6s\" (UID: \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\") " pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.476347 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btfzs\" (UniqueName: \"kubernetes.io/projected/63551e59-36d8-4ed5-a0c3-f425d10a51bf-kube-api-access-btfzs\") pod \"redhat-operators-96r6s\" (UID: \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\") " pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.578077 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63551e59-36d8-4ed5-a0c3-f425d10a51bf-utilities\") pod \"redhat-operators-96r6s\" (UID: \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\") " pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.578574 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btfzs\" (UniqueName: \"kubernetes.io/projected/63551e59-36d8-4ed5-a0c3-f425d10a51bf-kube-api-access-btfzs\") pod \"redhat-operators-96r6s\" (UID: \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\") " pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.578628 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63551e59-36d8-4ed5-a0c3-f425d10a51bf-catalog-content\") pod \"redhat-operators-96r6s\" (UID: \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\") " pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.578687 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63551e59-36d8-4ed5-a0c3-f425d10a51bf-utilities\") pod \"redhat-operators-96r6s\" (UID: \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\") " pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.579069 4708 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63551e59-36d8-4ed5-a0c3-f425d10a51bf-catalog-content\") pod \"redhat-operators-96r6s\" (UID: \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\") " pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.604827 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btfzs\" (UniqueName: \"kubernetes.io/projected/63551e59-36d8-4ed5-a0c3-f425d10a51bf-kube-api-access-btfzs\") pod \"redhat-operators-96r6s\" (UID: \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\") " pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:24:38 crc kubenswrapper[4708]: I0227 18:24:38.725078 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:24:39 crc kubenswrapper[4708]: I0227 18:24:39.221091 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-96r6s"] Feb 27 18:24:39 crc kubenswrapper[4708]: E0227 18:24:39.237515 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:24:39 crc kubenswrapper[4708]: I0227 18:24:39.607179 4708 generic.go:334] "Generic (PLEG): container finished" podID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" containerID="2fa6abb8f535408933e92197d8ea4b487b10eb6520393d5264d1d37ca2684055" exitCode=0 Feb 27 18:24:39 crc kubenswrapper[4708]: I0227 18:24:39.607234 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96r6s" event={"ID":"63551e59-36d8-4ed5-a0c3-f425d10a51bf","Type":"ContainerDied","Data":"2fa6abb8f535408933e92197d8ea4b487b10eb6520393d5264d1d37ca2684055"} Feb 27 18:24:39 crc kubenswrapper[4708]: I0227 18:24:39.607284 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96r6s" event={"ID":"63551e59-36d8-4ed5-a0c3-f425d10a51bf","Type":"ContainerStarted","Data":"64942dca26185b9e8695236ffabf607ed146a4aea895595311552a23a0db12e5"} Feb 27 18:24:40 crc kubenswrapper[4708]: E0227 18:24:40.261365 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 18:24:40 crc kubenswrapper[4708]: E0227 18:24:40.261888 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btfzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-96r6s_openshift-marketplace(63551e59-36d8-4ed5-a0c3-f425d10a51bf): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:24:40 crc kubenswrapper[4708]: E0227 18:24:40.263076 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-96r6s" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" Feb 27 18:24:40 crc kubenswrapper[4708]: E0227 18:24:40.620309 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-96r6s" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" Feb 27 18:24:41 crc kubenswrapper[4708]: I0227 18:24:41.229472 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:24:41 crc kubenswrapper[4708]: I0227 18:24:41.631816 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"bc88558550d87eae3a512b21fd1a12e6ff0ab0f0676c9f1b1877d03be6f078fe"} Feb 27 18:24:47 crc kubenswrapper[4708]: E0227 18:24:47.232367 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" 
podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:24:48 crc kubenswrapper[4708]: E0227 18:24:48.230191 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:24:50 crc kubenswrapper[4708]: E0227 18:24:50.231701 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:24:51 crc kubenswrapper[4708]: E0227 18:24:51.233648 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:24:53 crc kubenswrapper[4708]: E0227 18:24:53.856017 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 18:24:53 crc kubenswrapper[4708]: E0227 18:24:53.856954 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btfzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-96r6s_openshift-marketplace(63551e59-36d8-4ed5-a0c3-f425d10a51bf): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:24:53 crc kubenswrapper[4708]: E0227 18:24:53.858207 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-96r6s" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" Feb 27 18:25:01 crc kubenswrapper[4708]: E0227 18:25:01.232993 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:25:02 crc kubenswrapper[4708]: E0227 18:25:02.235633 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:25:03 crc kubenswrapper[4708]: E0227 18:25:03.231752 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:25:06 crc kubenswrapper[4708]: E0227 18:25:06.310110 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:25:06 crc kubenswrapper[4708]: E0227 18:25:06.310569 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:25:06 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:25:06 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xmkn7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod auto-csr-approver-29536934-qjmvw_openshift-infra(b35a5adf-48a7-4e39-9491-c45f9b71b9b7): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:25:06 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:25:06 crc kubenswrapper[4708]: E0227 18:25:06.311900 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:25:08 crc kubenswrapper[4708]: E0227 18:25:08.232692 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-96r6s" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" Feb 27 18:25:13 crc kubenswrapper[4708]: E0227 18:25:13.232669 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:25:16 crc kubenswrapper[4708]: E0227 18:25:16.231164 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:25:16 crc kubenswrapper[4708]: E0227 18:25:16.231173 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:25:17 crc kubenswrapper[4708]: E0227 18:25:17.230366 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:25:23 crc kubenswrapper[4708]: E0227 18:25:23.954272 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 18:25:23 crc kubenswrapper[4708]: E0227 18:25:23.955069 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container 
&Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-btfzs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-96r6s_openshift-marketplace(63551e59-36d8-4ed5-a0c3-f425d10a51bf): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:25:23 crc kubenswrapper[4708]: E0227 18:25:23.956539 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-96r6s" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" Feb 27 18:25:25 crc kubenswrapper[4708]: E0227 18:25:25.231266 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:25:27 crc kubenswrapper[4708]: E0227 18:25:27.231714 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:25:30 crc kubenswrapper[4708]: E0227 18:25:30.230314 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:25:31 crc kubenswrapper[4708]: E0227 18:25:31.231300 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:25:35 crc kubenswrapper[4708]: E0227 18:25:35.233431 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-96r6s" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" Feb 27 18:25:39 crc kubenswrapper[4708]: E0227 18:25:39.233209 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:25:40 crc kubenswrapper[4708]: E0227 18:25:40.230891 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:25:43 crc kubenswrapper[4708]: E0227 18:25:43.231461 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:25:44 crc kubenswrapper[4708]: E0227 18:25:44.231158 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:25:48 crc kubenswrapper[4708]: E0227 18:25:48.233224 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-96r6s" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" Feb 27 18:25:50 crc kubenswrapper[4708]: E0227 18:25:50.234541 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:25:54 crc kubenswrapper[4708]: E0227 18:25:54.232803 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:25:57 crc kubenswrapper[4708]: E0227 18:25:57.231528 4708 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:25:59 crc kubenswrapper[4708]: E0227 18:25:59.231019 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:26:00 crc kubenswrapper[4708]: I0227 18:26:00.178092 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536946-t9np9"] Feb 27 18:26:00 crc kubenswrapper[4708]: I0227 18:26:00.179548 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536946-t9np9" Feb 27 18:26:00 crc kubenswrapper[4708]: I0227 18:26:00.202897 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536946-t9np9"] Feb 27 18:26:00 crc kubenswrapper[4708]: I0227 18:26:00.254158 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkjmf\" (UniqueName: \"kubernetes.io/projected/f53f4f25-29d2-43e2-b655-7389d6656a4e-kube-api-access-qkjmf\") pod \"auto-csr-approver-29536946-t9np9\" (UID: \"f53f4f25-29d2-43e2-b655-7389d6656a4e\") " pod="openshift-infra/auto-csr-approver-29536946-t9np9" Feb 27 18:26:00 crc kubenswrapper[4708]: I0227 18:26:00.356443 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkjmf\" (UniqueName: \"kubernetes.io/projected/f53f4f25-29d2-43e2-b655-7389d6656a4e-kube-api-access-qkjmf\") pod \"auto-csr-approver-29536946-t9np9\" (UID: \"f53f4f25-29d2-43e2-b655-7389d6656a4e\") " pod="openshift-infra/auto-csr-approver-29536946-t9np9" Feb 27 18:26:00 crc kubenswrapper[4708]: I0227 18:26:00.377409 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkjmf\" (UniqueName: \"kubernetes.io/projected/f53f4f25-29d2-43e2-b655-7389d6656a4e-kube-api-access-qkjmf\") pod \"auto-csr-approver-29536946-t9np9\" (UID: \"f53f4f25-29d2-43e2-b655-7389d6656a4e\") " pod="openshift-infra/auto-csr-approver-29536946-t9np9" Feb 27 18:26:00 crc kubenswrapper[4708]: I0227 18:26:00.498094 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536946-t9np9" Feb 27 18:26:00 crc kubenswrapper[4708]: I0227 18:26:00.996521 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536946-t9np9"] Feb 27 18:26:01 crc kubenswrapper[4708]: E0227 18:26:01.230595 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-96r6s" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" Feb 27 18:26:01 crc kubenswrapper[4708]: I0227 18:26:01.546300 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536946-t9np9" event={"ID":"f53f4f25-29d2-43e2-b655-7389d6656a4e","Type":"ContainerStarted","Data":"e0da1a20bdaffa8a39212e3604038a5fecbee1f2a3495af306458cd8a0aa4c25"} Feb 27 18:26:02 crc kubenswrapper[4708]: E0227 18:26:02.111425 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:26:02 crc kubenswrapper[4708]: E0227 18:26:02.111618 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:26:02 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:26:02 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qkjmf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536946-t9np9_openshift-infra(f53f4f25-29d2-43e2-b655-7389d6656a4e): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:26:02 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:26:02 crc kubenswrapper[4708]: E0227 18:26:02.112870 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" 
pod="openshift-infra/auto-csr-approver-29536946-t9np9" podUID="f53f4f25-29d2-43e2-b655-7389d6656a4e" Feb 27 18:26:02 crc kubenswrapper[4708]: E0227 18:26:02.564767 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536946-t9np9" podUID="f53f4f25-29d2-43e2-b655-7389d6656a4e" Feb 27 18:26:04 crc kubenswrapper[4708]: E0227 18:26:04.230571 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:26:05 crc kubenswrapper[4708]: I0227 18:26:05.357991 4708 scope.go:117] "RemoveContainer" containerID="8d6a3fa0ef187744a4abfea5b559fb4c1701c9663d4ff29e32318396bccca779" Feb 27 18:26:07 crc kubenswrapper[4708]: E0227 18:26:07.229794 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:26:12 crc kubenswrapper[4708]: E0227 18:26:12.243505 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:26:13 crc kubenswrapper[4708]: E0227 18:26:13.231039 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:26:13 crc kubenswrapper[4708]: I0227 18:26:13.682315 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96r6s" event={"ID":"63551e59-36d8-4ed5-a0c3-f425d10a51bf","Type":"ContainerStarted","Data":"2d8874b64d6e2fa6d58f3c5c90881240dffdf5309e9f8574f5c6f752beab0fc1"} Feb 27 18:26:18 crc kubenswrapper[4708]: I0227 18:26:18.740399 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536946-t9np9" event={"ID":"f53f4f25-29d2-43e2-b655-7389d6656a4e","Type":"ContainerStarted","Data":"42b7495c4aba69fe83dcce51d43616a668753410bf771f12d1dca18f56114285"} Feb 27 18:26:18 crc kubenswrapper[4708]: I0227 18:26:18.752693 4708 generic.go:334] "Generic (PLEG): container finished" podID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" containerID="2d8874b64d6e2fa6d58f3c5c90881240dffdf5309e9f8574f5c6f752beab0fc1" exitCode=0 Feb 27 18:26:18 crc kubenswrapper[4708]: I0227 18:26:18.752746 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96r6s" event={"ID":"63551e59-36d8-4ed5-a0c3-f425d10a51bf","Type":"ContainerDied","Data":"2d8874b64d6e2fa6d58f3c5c90881240dffdf5309e9f8574f5c6f752beab0fc1"} Feb 27 18:26:18 crc kubenswrapper[4708]: I0227 18:26:18.776028 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536946-t9np9" 
podStartSLOduration=1.496865896 podStartE2EDuration="18.776002737s" podCreationTimestamp="2026-02-27 18:26:00 +0000 UTC" firstStartedPulling="2026-02-27 18:26:01.001468571 +0000 UTC m=+5559.517266158" lastFinishedPulling="2026-02-27 18:26:18.280605412 +0000 UTC m=+5576.796402999" observedRunningTime="2026-02-27 18:26:18.760837138 +0000 UTC m=+5577.276634715" watchObservedRunningTime="2026-02-27 18:26:18.776002737 +0000 UTC m=+5577.291800334" Feb 27 18:26:19 crc kubenswrapper[4708]: E0227 18:26:19.230347 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:26:19 crc kubenswrapper[4708]: E0227 18:26:19.231319 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:26:19 crc kubenswrapper[4708]: I0227 18:26:19.773771 4708 generic.go:334] "Generic (PLEG): container finished" podID="f53f4f25-29d2-43e2-b655-7389d6656a4e" containerID="42b7495c4aba69fe83dcce51d43616a668753410bf771f12d1dca18f56114285" exitCode=0 Feb 27 18:26:19 crc kubenswrapper[4708]: I0227 18:26:19.773919 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536946-t9np9" event={"ID":"f53f4f25-29d2-43e2-b655-7389d6656a4e","Type":"ContainerDied","Data":"42b7495c4aba69fe83dcce51d43616a668753410bf771f12d1dca18f56114285"} Feb 27 18:26:20 crc kubenswrapper[4708]: I0227 18:26:20.791882 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96r6s" event={"ID":"63551e59-36d8-4ed5-a0c3-f425d10a51bf","Type":"ContainerStarted","Data":"d33ad6341a6acf1332d378e8fa0e4712930bbc641947520c41855a1baf53f5ae"} Feb 27 18:26:20 crc kubenswrapper[4708]: I0227 18:26:20.820765 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-96r6s" podStartSLOduration=2.707552888 podStartE2EDuration="1m42.820732938s" podCreationTimestamp="2026-02-27 18:24:38 +0000 UTC" firstStartedPulling="2026-02-27 18:24:39.609275081 +0000 UTC m=+5478.125072668" lastFinishedPulling="2026-02-27 18:26:19.722455131 +0000 UTC m=+5578.238252718" observedRunningTime="2026-02-27 18:26:20.815210972 +0000 UTC m=+5579.331008599" watchObservedRunningTime="2026-02-27 18:26:20.820732938 +0000 UTC m=+5579.336530555" Feb 27 18:26:21 crc kubenswrapper[4708]: I0227 18:26:21.279206 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536946-t9np9" Feb 27 18:26:21 crc kubenswrapper[4708]: I0227 18:26:21.355620 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkjmf\" (UniqueName: \"kubernetes.io/projected/f53f4f25-29d2-43e2-b655-7389d6656a4e-kube-api-access-qkjmf\") pod \"f53f4f25-29d2-43e2-b655-7389d6656a4e\" (UID: \"f53f4f25-29d2-43e2-b655-7389d6656a4e\") " Feb 27 18:26:21 crc kubenswrapper[4708]: I0227 18:26:21.361620 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f53f4f25-29d2-43e2-b655-7389d6656a4e-kube-api-access-qkjmf" (OuterVolumeSpecName: "kube-api-access-qkjmf") pod "f53f4f25-29d2-43e2-b655-7389d6656a4e" (UID: "f53f4f25-29d2-43e2-b655-7389d6656a4e"). InnerVolumeSpecName "kube-api-access-qkjmf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:26:21 crc kubenswrapper[4708]: I0227 18:26:21.458064 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkjmf\" (UniqueName: \"kubernetes.io/projected/f53f4f25-29d2-43e2-b655-7389d6656a4e-kube-api-access-qkjmf\") on node \"crc\" DevicePath \"\"" Feb 27 18:26:21 crc kubenswrapper[4708]: I0227 18:26:21.810628 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536946-t9np9" event={"ID":"f53f4f25-29d2-43e2-b655-7389d6656a4e","Type":"ContainerDied","Data":"e0da1a20bdaffa8a39212e3604038a5fecbee1f2a3495af306458cd8a0aa4c25"} Feb 27 18:26:21 crc kubenswrapper[4708]: I0227 18:26:21.810671 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0da1a20bdaffa8a39212e3604038a5fecbee1f2a3495af306458cd8a0aa4c25" Feb 27 18:26:21 crc kubenswrapper[4708]: I0227 18:26:21.810677 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536946-t9np9" Feb 27 18:26:21 crc kubenswrapper[4708]: I0227 18:26:21.850310 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536940-p7fj5"] Feb 27 18:26:21 crc kubenswrapper[4708]: I0227 18:26:21.862264 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536940-p7fj5"] Feb 27 18:26:22 crc kubenswrapper[4708]: I0227 18:26:22.255574 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c46d8a-05d5-44c8-86b3-a571832c34aa" path="/var/lib/kubelet/pods/07c46d8a-05d5-44c8-86b3-a571832c34aa/volumes" Feb 27 18:26:23 crc kubenswrapper[4708]: E0227 18:26:23.231178 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:26:24 crc kubenswrapper[4708]: E0227 18:26:24.231677 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:26:28 crc kubenswrapper[4708]: I0227 18:26:28.726090 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:26:28 crc kubenswrapper[4708]: I0227 18:26:28.726780 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:26:29 crc kubenswrapper[4708]: I0227 18:26:29.772278 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-96r6s" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" containerName="registry-server" probeResult="failure" output=< Feb 27 18:26:29 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 18:26:29 crc kubenswrapper[4708]: > Feb 27 18:26:30 crc kubenswrapper[4708]: E0227 18:26:30.232436 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:26:33 crc kubenswrapper[4708]: E0227 18:26:33.230308 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:26:36 crc kubenswrapper[4708]: E0227 18:26:36.237197 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:26:38 crc kubenswrapper[4708]: E0227 18:26:38.232052 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:26:38 crc kubenswrapper[4708]: I0227 18:26:38.785247 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:26:38 crc kubenswrapper[4708]: I0227 18:26:38.830322 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:26:39 crc kubenswrapper[4708]: I0227 18:26:39.315346 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-96r6s"] Feb 27 18:26:40 crc kubenswrapper[4708]: I0227 18:26:40.000586 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-96r6s" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" containerName="registry-server" containerID="cri-o://d33ad6341a6acf1332d378e8fa0e4712930bbc641947520c41855a1baf53f5ae" gracePeriod=2 Feb 27 18:26:40 crc kubenswrapper[4708]: I0227 18:26:40.550731 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:26:40 crc kubenswrapper[4708]: I0227 18:26:40.599863 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63551e59-36d8-4ed5-a0c3-f425d10a51bf-utilities\") pod \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\" (UID: \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\") " Feb 27 18:26:40 crc kubenswrapper[4708]: I0227 18:26:40.599962 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63551e59-36d8-4ed5-a0c3-f425d10a51bf-catalog-content\") pod \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\" (UID: \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\") " Feb 27 18:26:40 crc kubenswrapper[4708]: I0227 18:26:40.600061 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btfzs\" (UniqueName: \"kubernetes.io/projected/63551e59-36d8-4ed5-a0c3-f425d10a51bf-kube-api-access-btfzs\") pod \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\" (UID: \"63551e59-36d8-4ed5-a0c3-f425d10a51bf\") " Feb 27 18:26:40 crc kubenswrapper[4708]: I0227 18:26:40.601434 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63551e59-36d8-4ed5-a0c3-f425d10a51bf-utilities" (OuterVolumeSpecName: "utilities") pod "63551e59-36d8-4ed5-a0c3-f425d10a51bf" (UID: "63551e59-36d8-4ed5-a0c3-f425d10a51bf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:26:40 crc kubenswrapper[4708]: I0227 18:26:40.615092 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63551e59-36d8-4ed5-a0c3-f425d10a51bf-kube-api-access-btfzs" (OuterVolumeSpecName: "kube-api-access-btfzs") pod "63551e59-36d8-4ed5-a0c3-f425d10a51bf" (UID: "63551e59-36d8-4ed5-a0c3-f425d10a51bf"). InnerVolumeSpecName "kube-api-access-btfzs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:26:40 crc kubenswrapper[4708]: I0227 18:26:40.701712 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63551e59-36d8-4ed5-a0c3-f425d10a51bf-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:26:40 crc kubenswrapper[4708]: I0227 18:26:40.701749 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btfzs\" (UniqueName: \"kubernetes.io/projected/63551e59-36d8-4ed5-a0c3-f425d10a51bf-kube-api-access-btfzs\") on node \"crc\" DevicePath \"\"" Feb 27 18:26:40 crc kubenswrapper[4708]: I0227 18:26:40.782459 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63551e59-36d8-4ed5-a0c3-f425d10a51bf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63551e59-36d8-4ed5-a0c3-f425d10a51bf" (UID: "63551e59-36d8-4ed5-a0c3-f425d10a51bf"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:26:40 crc kubenswrapper[4708]: I0227 18:26:40.803881 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63551e59-36d8-4ed5-a0c3-f425d10a51bf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.013711 4708 generic.go:334] "Generic (PLEG): container finished" podID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" containerID="d33ad6341a6acf1332d378e8fa0e4712930bbc641947520c41855a1baf53f5ae" exitCode=0 Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.013786 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96r6s" event={"ID":"63551e59-36d8-4ed5-a0c3-f425d10a51bf","Type":"ContainerDied","Data":"d33ad6341a6acf1332d378e8fa0e4712930bbc641947520c41855a1baf53f5ae"} Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.013870 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-96r6s" Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.013905 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-96r6s" event={"ID":"63551e59-36d8-4ed5-a0c3-f425d10a51bf","Type":"ContainerDied","Data":"64942dca26185b9e8695236ffabf607ed146a4aea895595311552a23a0db12e5"} Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.013942 4708 scope.go:117] "RemoveContainer" containerID="d33ad6341a6acf1332d378e8fa0e4712930bbc641947520c41855a1baf53f5ae" Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.045370 4708 scope.go:117] "RemoveContainer" containerID="2d8874b64d6e2fa6d58f3c5c90881240dffdf5309e9f8574f5c6f752beab0fc1" Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.061034 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-96r6s"] Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.069760 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-96r6s"] Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.088979 4708 scope.go:117] "RemoveContainer" containerID="2fa6abb8f535408933e92197d8ea4b487b10eb6520393d5264d1d37ca2684055" Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.152169 4708 scope.go:117] "RemoveContainer" containerID="d33ad6341a6acf1332d378e8fa0e4712930bbc641947520c41855a1baf53f5ae" Feb 27 18:26:41 crc kubenswrapper[4708]: E0227 18:26:41.152692 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d33ad6341a6acf1332d378e8fa0e4712930bbc641947520c41855a1baf53f5ae\": container with ID starting with d33ad6341a6acf1332d378e8fa0e4712930bbc641947520c41855a1baf53f5ae not found: ID does not exist" containerID="d33ad6341a6acf1332d378e8fa0e4712930bbc641947520c41855a1baf53f5ae" Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.152733 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d33ad6341a6acf1332d378e8fa0e4712930bbc641947520c41855a1baf53f5ae"} err="failed to get container status \"d33ad6341a6acf1332d378e8fa0e4712930bbc641947520c41855a1baf53f5ae\": rpc error: code = NotFound desc = could not find container \"d33ad6341a6acf1332d378e8fa0e4712930bbc641947520c41855a1baf53f5ae\": container with ID starting with d33ad6341a6acf1332d378e8fa0e4712930bbc641947520c41855a1baf53f5ae not found: ID does not exist" Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.152759 4708 scope.go:117] "RemoveContainer" containerID="2d8874b64d6e2fa6d58f3c5c90881240dffdf5309e9f8574f5c6f752beab0fc1" Feb 27 18:26:41 crc kubenswrapper[4708]: E0227 18:26:41.153240 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2d8874b64d6e2fa6d58f3c5c90881240dffdf5309e9f8574f5c6f752beab0fc1\": container with ID starting with 2d8874b64d6e2fa6d58f3c5c90881240dffdf5309e9f8574f5c6f752beab0fc1 not found: ID does not exist" containerID="2d8874b64d6e2fa6d58f3c5c90881240dffdf5309e9f8574f5c6f752beab0fc1" Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.153289 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2d8874b64d6e2fa6d58f3c5c90881240dffdf5309e9f8574f5c6f752beab0fc1"} err="failed to get container status \"2d8874b64d6e2fa6d58f3c5c90881240dffdf5309e9f8574f5c6f752beab0fc1\": rpc error: code = NotFound desc = could not find container 
\"2d8874b64d6e2fa6d58f3c5c90881240dffdf5309e9f8574f5c6f752beab0fc1\": container with ID starting with 2d8874b64d6e2fa6d58f3c5c90881240dffdf5309e9f8574f5c6f752beab0fc1 not found: ID does not exist" Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.153330 4708 scope.go:117] "RemoveContainer" containerID="2fa6abb8f535408933e92197d8ea4b487b10eb6520393d5264d1d37ca2684055" Feb 27 18:26:41 crc kubenswrapper[4708]: E0227 18:26:41.153620 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fa6abb8f535408933e92197d8ea4b487b10eb6520393d5264d1d37ca2684055\": container with ID starting with 2fa6abb8f535408933e92197d8ea4b487b10eb6520393d5264d1d37ca2684055 not found: ID does not exist" containerID="2fa6abb8f535408933e92197d8ea4b487b10eb6520393d5264d1d37ca2684055" Feb 27 18:26:41 crc kubenswrapper[4708]: I0227 18:26:41.153644 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fa6abb8f535408933e92197d8ea4b487b10eb6520393d5264d1d37ca2684055"} err="failed to get container status \"2fa6abb8f535408933e92197d8ea4b487b10eb6520393d5264d1d37ca2684055\": rpc error: code = NotFound desc = could not find container \"2fa6abb8f535408933e92197d8ea4b487b10eb6520393d5264d1d37ca2684055\": container with ID starting with 2fa6abb8f535408933e92197d8ea4b487b10eb6520393d5264d1d37ca2684055 not found: ID does not exist" Feb 27 18:26:42 crc kubenswrapper[4708]: I0227 18:26:42.240897 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" path="/var/lib/kubelet/pods/63551e59-36d8-4ed5-a0c3-f425d10a51bf/volumes" Feb 27 18:26:43 crc kubenswrapper[4708]: E0227 18:26:43.229663 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:26:48 crc kubenswrapper[4708]: E0227 18:26:48.232273 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:26:48 crc kubenswrapper[4708]: E0227 18:26:48.232464 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:26:53 crc kubenswrapper[4708]: E0227 18:26:53.230575 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:26:57 crc kubenswrapper[4708]: E0227 18:26:57.232165 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536926-27vc5" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" Feb 27 18:26:59 crc 
kubenswrapper[4708]: E0227 18:26:59.232300 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:26:59 crc kubenswrapper[4708]: E0227 18:26:59.232520 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:27:05 crc kubenswrapper[4708]: I0227 18:27:05.449440 4708 scope.go:117] "RemoveContainer" containerID="15fa4c76171b9030125850145801b86e5b4e969534e613e1dc80127c9cf89800" Feb 27 18:27:05 crc kubenswrapper[4708]: I0227 18:27:05.631401 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:27:05 crc kubenswrapper[4708]: I0227 18:27:05.631465 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:27:08 crc kubenswrapper[4708]: E0227 18:27:08.231716 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:27:10 crc kubenswrapper[4708]: E0227 18:27:10.233151 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" podUID="fb343271-5527-4655-973b-f3a35b328fce" Feb 27 18:27:11 crc kubenswrapper[4708]: E0227 18:27:11.232125 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:27:13 crc kubenswrapper[4708]: I0227 18:27:13.383563 4708 generic.go:334] "Generic (PLEG): container finished" podID="4169fe13-35f1-4450-b318-9b29670cdf2d" containerID="b3a6d6de26d2299836a77d8474214051d20d3a4b3f02f5b60369a23fbbfd16c7" exitCode=0 Feb 27 18:27:13 crc kubenswrapper[4708]: I0227 18:27:13.383660 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536926-27vc5" event={"ID":"4169fe13-35f1-4450-b318-9b29670cdf2d","Type":"ContainerDied","Data":"b3a6d6de26d2299836a77d8474214051d20d3a4b3f02f5b60369a23fbbfd16c7"} Feb 27 18:27:14 crc kubenswrapper[4708]: I0227 18:27:14.835901 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536926-27vc5" Feb 27 18:27:15 crc kubenswrapper[4708]: I0227 18:27:15.027006 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l59vl\" (UniqueName: \"kubernetes.io/projected/4169fe13-35f1-4450-b318-9b29670cdf2d-kube-api-access-l59vl\") pod \"4169fe13-35f1-4450-b318-9b29670cdf2d\" (UID: \"4169fe13-35f1-4450-b318-9b29670cdf2d\") " Feb 27 18:27:15 crc kubenswrapper[4708]: I0227 18:27:15.034249 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4169fe13-35f1-4450-b318-9b29670cdf2d-kube-api-access-l59vl" (OuterVolumeSpecName: "kube-api-access-l59vl") pod "4169fe13-35f1-4450-b318-9b29670cdf2d" (UID: "4169fe13-35f1-4450-b318-9b29670cdf2d"). InnerVolumeSpecName "kube-api-access-l59vl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:27:15 crc kubenswrapper[4708]: I0227 18:27:15.130071 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l59vl\" (UniqueName: \"kubernetes.io/projected/4169fe13-35f1-4450-b318-9b29670cdf2d-kube-api-access-l59vl\") on node \"crc\" DevicePath \"\"" Feb 27 18:27:15 crc kubenswrapper[4708]: I0227 18:27:15.406491 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536926-27vc5" event={"ID":"4169fe13-35f1-4450-b318-9b29670cdf2d","Type":"ContainerDied","Data":"9da5182d5aab0c08a54ceee82c435ab8733c0b634f326d0681c6395693522a7c"} Feb 27 18:27:15 crc kubenswrapper[4708]: I0227 18:27:15.406532 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9da5182d5aab0c08a54ceee82c435ab8733c0b634f326d0681c6395693522a7c" Feb 27 18:27:15 crc kubenswrapper[4708]: I0227 18:27:15.406569 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536926-27vc5" Feb 27 18:27:15 crc kubenswrapper[4708]: I0227 18:27:15.926596 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536926-27vc5"] Feb 27 18:27:15 crc kubenswrapper[4708]: I0227 18:27:15.939575 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536926-27vc5"] Feb 27 18:27:16 crc kubenswrapper[4708]: I0227 18:27:16.245825 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" path="/var/lib/kubelet/pods/4169fe13-35f1-4450-b318-9b29670cdf2d/volumes" Feb 27 18:27:20 crc kubenswrapper[4708]: E0227 18:27:20.231499 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:27:25 crc kubenswrapper[4708]: E0227 18:27:25.230369 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:27:32 crc kubenswrapper[4708]: E0227 18:27:32.244092 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:27:35 crc kubenswrapper[4708]: I0227 18:27:35.631574 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:27:35 crc kubenswrapper[4708]: I0227 18:27:35.632263 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:27:36 crc kubenswrapper[4708]: E0227 18:27:36.231951 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:27:44 crc kubenswrapper[4708]: E0227 18:27:44.235098 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:27:47 crc kubenswrapper[4708]: E0227 18:27:47.231814 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:27:59 crc kubenswrapper[4708]: E0227 18:27:59.230260 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.175505 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536948-fq4r9"] Feb 27 18:28:00 crc kubenswrapper[4708]: E0227 18:28:00.176525 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f53f4f25-29d2-43e2-b655-7389d6656a4e" containerName="oc" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.176558 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f53f4f25-29d2-43e2-b655-7389d6656a4e" containerName="oc" Feb 27 18:28:00 crc kubenswrapper[4708]: E0227 18:28:00.176586 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" containerName="oc" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.176600 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" containerName="oc" Feb 27 18:28:00 crc kubenswrapper[4708]: E0227 18:28:00.176641 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" containerName="extract-content" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.176654 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" containerName="extract-content" Feb 27 18:28:00 crc kubenswrapper[4708]: E0227 18:28:00.176671 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" containerName="registry-server" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.176685 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" containerName="registry-server" Feb 27 18:28:00 crc kubenswrapper[4708]: E0227 18:28:00.176730 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" containerName="extract-utilities" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.176743 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" containerName="extract-utilities" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.177112 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="4169fe13-35f1-4450-b318-9b29670cdf2d" containerName="oc" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.177168 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f53f4f25-29d2-43e2-b655-7389d6656a4e" containerName="oc" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.177203 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="63551e59-36d8-4ed5-a0c3-f425d10a51bf" containerName="registry-server" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.178422 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536948-fq4r9" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.188110 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536948-fq4r9"] Feb 27 18:28:00 crc kubenswrapper[4708]: E0227 18:28:00.230394 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.289010 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmz9m\" (UniqueName: \"kubernetes.io/projected/e786fec6-0250-4b8d-8a37-63395236230b-kube-api-access-xmz9m\") pod \"auto-csr-approver-29536948-fq4r9\" (UID: \"e786fec6-0250-4b8d-8a37-63395236230b\") " pod="openshift-infra/auto-csr-approver-29536948-fq4r9" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.390545 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmz9m\" (UniqueName: \"kubernetes.io/projected/e786fec6-0250-4b8d-8a37-63395236230b-kube-api-access-xmz9m\") pod \"auto-csr-approver-29536948-fq4r9\" (UID: \"e786fec6-0250-4b8d-8a37-63395236230b\") " pod="openshift-infra/auto-csr-approver-29536948-fq4r9" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.414162 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmz9m\" (UniqueName: \"kubernetes.io/projected/e786fec6-0250-4b8d-8a37-63395236230b-kube-api-access-xmz9m\") pod \"auto-csr-approver-29536948-fq4r9\" (UID: \"e786fec6-0250-4b8d-8a37-63395236230b\") " pod="openshift-infra/auto-csr-approver-29536948-fq4r9" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.502102 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536948-fq4r9" Feb 27 18:28:00 crc kubenswrapper[4708]: I0227 18:28:00.969227 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536948-fq4r9"] Feb 27 18:28:00 crc kubenswrapper[4708]: W0227 18:28:00.973357 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode786fec6_0250_4b8d_8a37_63395236230b.slice/crio-cc4db507db446afff572ad858c30284db0aba772eeb013404445e5fe9a64117b WatchSource:0}: Error finding container cc4db507db446afff572ad858c30284db0aba772eeb013404445e5fe9a64117b: Status 404 returned error can't find the container with id cc4db507db446afff572ad858c30284db0aba772eeb013404445e5fe9a64117b Feb 27 18:28:01 crc kubenswrapper[4708]: I0227 18:28:01.903766 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536948-fq4r9" event={"ID":"e786fec6-0250-4b8d-8a37-63395236230b","Type":"ContainerStarted","Data":"cc4db507db446afff572ad858c30284db0aba772eeb013404445e5fe9a64117b"} Feb 27 18:28:02 crc kubenswrapper[4708]: I0227 18:28:02.918315 4708 generic.go:334] "Generic (PLEG): container finished" podID="e786fec6-0250-4b8d-8a37-63395236230b" containerID="73d34910668ff4c577abe1dc6605e16ca2a7f555e75846b4a2a6f4e2fa696211" exitCode=0 Feb 27 18:28:02 crc kubenswrapper[4708]: I0227 18:28:02.918401 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536948-fq4r9" event={"ID":"e786fec6-0250-4b8d-8a37-63395236230b","Type":"ContainerDied","Data":"73d34910668ff4c577abe1dc6605e16ca2a7f555e75846b4a2a6f4e2fa696211"} Feb 27 18:28:04 crc kubenswrapper[4708]: I0227 18:28:04.428750 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536948-fq4r9" Feb 27 18:28:04 crc kubenswrapper[4708]: I0227 18:28:04.485100 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmz9m\" (UniqueName: \"kubernetes.io/projected/e786fec6-0250-4b8d-8a37-63395236230b-kube-api-access-xmz9m\") pod \"e786fec6-0250-4b8d-8a37-63395236230b\" (UID: \"e786fec6-0250-4b8d-8a37-63395236230b\") " Feb 27 18:28:04 crc kubenswrapper[4708]: I0227 18:28:04.493260 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e786fec6-0250-4b8d-8a37-63395236230b-kube-api-access-xmz9m" (OuterVolumeSpecName: "kube-api-access-xmz9m") pod "e786fec6-0250-4b8d-8a37-63395236230b" (UID: "e786fec6-0250-4b8d-8a37-63395236230b"). InnerVolumeSpecName "kube-api-access-xmz9m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:28:04 crc kubenswrapper[4708]: I0227 18:28:04.587908 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmz9m\" (UniqueName: \"kubernetes.io/projected/e786fec6-0250-4b8d-8a37-63395236230b-kube-api-access-xmz9m\") on node \"crc\" DevicePath \"\"" Feb 27 18:28:04 crc kubenswrapper[4708]: I0227 18:28:04.943664 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536948-fq4r9" event={"ID":"e786fec6-0250-4b8d-8a37-63395236230b","Type":"ContainerDied","Data":"cc4db507db446afff572ad858c30284db0aba772eeb013404445e5fe9a64117b"} Feb 27 18:28:04 crc kubenswrapper[4708]: I0227 18:28:04.944035 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc4db507db446afff572ad858c30284db0aba772eeb013404445e5fe9a64117b" Feb 27 18:28:04 crc kubenswrapper[4708]: I0227 18:28:04.943754 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536948-fq4r9" Feb 27 18:28:05 crc kubenswrapper[4708]: I0227 18:28:05.522913 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536942-bbblb"] Feb 27 18:28:05 crc kubenswrapper[4708]: I0227 18:28:05.540922 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536942-bbblb"] Feb 27 18:28:05 crc kubenswrapper[4708]: I0227 18:28:05.632250 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:28:05 crc kubenswrapper[4708]: I0227 18:28:05.632327 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:28:05 crc kubenswrapper[4708]: I0227 18:28:05.632384 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 18:28:05 crc kubenswrapper[4708]: I0227 18:28:05.633518 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc88558550d87eae3a512b21fd1a12e6ff0ab0f0676c9f1b1877d03be6f078fe"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:28:05 crc kubenswrapper[4708]: I0227 18:28:05.633604 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://bc88558550d87eae3a512b21fd1a12e6ff0ab0f0676c9f1b1877d03be6f078fe" gracePeriod=600 Feb 27 18:28:05 crc kubenswrapper[4708]: I0227 18:28:05.956693 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="bc88558550d87eae3a512b21fd1a12e6ff0ab0f0676c9f1b1877d03be6f078fe" exitCode=0 Feb 27 18:28:05 crc kubenswrapper[4708]: I0227 18:28:05.956811 4708 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"bc88558550d87eae3a512b21fd1a12e6ff0ab0f0676c9f1b1877d03be6f078fe"} Feb 27 18:28:05 crc kubenswrapper[4708]: I0227 18:28:05.957034 4708 scope.go:117] "RemoveContainer" containerID="d51bf1ebf40211094a97fe78aa6495e5626b843066be63e57fe36c86fe65e785" Feb 27 18:28:06 crc kubenswrapper[4708]: I0227 18:28:06.248278 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6938980b-8ef4-4ded-9afe-3e2adbc609ec" path="/var/lib/kubelet/pods/6938980b-8ef4-4ded-9afe-3e2adbc609ec/volumes" Feb 27 18:28:06 crc kubenswrapper[4708]: I0227 18:28:06.969952 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc"} Feb 27 18:28:11 crc kubenswrapper[4708]: E0227 18:28:11.230857 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:28:12 crc kubenswrapper[4708]: E0227 18:28:12.245315 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:28:15 crc kubenswrapper[4708]: I0227 18:28:15.068299 4708 generic.go:334] "Generic (PLEG): container finished" podID="fb343271-5527-4655-973b-f3a35b328fce" containerID="3ba0bfeeb3d331982f287a2f1dbaf9458f66246784570abd3186709edb49bf1a" exitCode=0 Feb 27 18:28:15 crc kubenswrapper[4708]: I0227 18:28:15.068406 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" event={"ID":"fb343271-5527-4655-973b-f3a35b328fce","Type":"ContainerDied","Data":"3ba0bfeeb3d331982f287a2f1dbaf9458f66246784570abd3186709edb49bf1a"} Feb 27 18:28:16 crc kubenswrapper[4708]: I0227 18:28:16.558077 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" Feb 27 18:28:16 crc kubenswrapper[4708]: I0227 18:28:16.637680 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dm64\" (UniqueName: \"kubernetes.io/projected/fb343271-5527-4655-973b-f3a35b328fce-kube-api-access-6dm64\") pod \"fb343271-5527-4655-973b-f3a35b328fce\" (UID: \"fb343271-5527-4655-973b-f3a35b328fce\") " Feb 27 18:28:16 crc kubenswrapper[4708]: I0227 18:28:16.649776 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb343271-5527-4655-973b-f3a35b328fce-kube-api-access-6dm64" (OuterVolumeSpecName: "kube-api-access-6dm64") pod "fb343271-5527-4655-973b-f3a35b328fce" (UID: "fb343271-5527-4655-973b-f3a35b328fce"). InnerVolumeSpecName "kube-api-access-6dm64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:28:16 crc kubenswrapper[4708]: I0227 18:28:16.740441 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6dm64\" (UniqueName: \"kubernetes.io/projected/fb343271-5527-4655-973b-f3a35b328fce-kube-api-access-6dm64\") on node \"crc\" DevicePath \"\"" Feb 27 18:28:17 crc kubenswrapper[4708]: I0227 18:28:17.094720 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" event={"ID":"fb343271-5527-4655-973b-f3a35b328fce","Type":"ContainerDied","Data":"3e36e817ad737307f4b7aa27952356ad380b17a3008f14eb670f56a3e8d815ee"} Feb 27 18:28:17 crc kubenswrapper[4708]: I0227 18:28:17.094781 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e36e817ad737307f4b7aa27952356ad380b17a3008f14eb670f56a3e8d815ee" Feb 27 18:28:17 crc kubenswrapper[4708]: I0227 18:28:17.094777 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536930-d9sgn" Feb 27 18:28:17 crc kubenswrapper[4708]: I0227 18:28:17.645159 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536930-d9sgn"] Feb 27 18:28:17 crc kubenswrapper[4708]: I0227 18:28:17.652953 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536930-d9sgn"] Feb 27 18:28:18 crc kubenswrapper[4708]: I0227 18:28:18.251446 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb343271-5527-4655-973b-f3a35b328fce" path="/var/lib/kubelet/pods/fb343271-5527-4655-973b-f3a35b328fce/volumes" Feb 27 18:28:23 crc kubenswrapper[4708]: E0227 18:28:23.231181 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:28:26 crc kubenswrapper[4708]: E0227 18:28:26.232461 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:28:34 crc kubenswrapper[4708]: E0227 18:28:34.231779 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:28:37 crc kubenswrapper[4708]: E0227 18:28:37.231983 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:28:48 crc kubenswrapper[4708]: E0227 18:28:48.232697 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:28:52 crc 
kubenswrapper[4708]: E0227 18:28:52.250629 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:29:02 crc kubenswrapper[4708]: E0227 18:29:02.243567 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:29:05 crc kubenswrapper[4708]: E0227 18:29:05.231466 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:29:05 crc kubenswrapper[4708]: I0227 18:29:05.607540 4708 scope.go:117] "RemoveContainer" containerID="11dfb7da0010c92d6ee38200e71fe986232df57e849d6b014b3dbeefd49d6725" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.211462 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6zl85"] Feb 27 18:29:07 crc kubenswrapper[4708]: E0227 18:29:07.212303 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb343271-5527-4655-973b-f3a35b328fce" containerName="oc" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.212320 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb343271-5527-4655-973b-f3a35b328fce" containerName="oc" Feb 27 18:29:07 crc kubenswrapper[4708]: E0227 18:29:07.212334 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e786fec6-0250-4b8d-8a37-63395236230b" containerName="oc" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.212343 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e786fec6-0250-4b8d-8a37-63395236230b" containerName="oc" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.212579 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="e786fec6-0250-4b8d-8a37-63395236230b" containerName="oc" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.212603 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb343271-5527-4655-973b-f3a35b328fce" containerName="oc" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.214405 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.250911 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zl85"] Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.295471 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nx7sc\" (UniqueName: \"kubernetes.io/projected/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-kube-api-access-nx7sc\") pod \"community-operators-6zl85\" (UID: \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\") " pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.296043 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-utilities\") pod \"community-operators-6zl85\" (UID: \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\") " pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.296414 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-catalog-content\") pod \"community-operators-6zl85\" (UID: \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\") " pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.398509 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nx7sc\" (UniqueName: \"kubernetes.io/projected/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-kube-api-access-nx7sc\") pod \"community-operators-6zl85\" (UID: \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\") " pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.398641 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-utilities\") pod \"community-operators-6zl85\" (UID: \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\") " pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.398706 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-catalog-content\") pod \"community-operators-6zl85\" (UID: \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\") " pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.399271 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-catalog-content\") pod \"community-operators-6zl85\" (UID: \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\") " pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.399269 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-utilities\") pod \"community-operators-6zl85\" (UID: \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\") " pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:07 crc kubenswrapper[4708]: I0227 18:29:07.927979 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-nx7sc\" (UniqueName: \"kubernetes.io/projected/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-kube-api-access-nx7sc\") pod \"community-operators-6zl85\" (UID: \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\") " pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:08 crc kubenswrapper[4708]: I0227 18:29:08.141512 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:08 crc kubenswrapper[4708]: W0227 18:29:08.641741 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod993da7ad_6c24_4bbe_acb9_7d3d4de5a3c1.slice/crio-1aeded76baf59ae72b086a4d1518f6dc18e73683ad2ec97c326762bfa311a175 WatchSource:0}: Error finding container 1aeded76baf59ae72b086a4d1518f6dc18e73683ad2ec97c326762bfa311a175: Status 404 returned error can't find the container with id 1aeded76baf59ae72b086a4d1518f6dc18e73683ad2ec97c326762bfa311a175 Feb 27 18:29:08 crc kubenswrapper[4708]: I0227 18:29:08.642706 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6zl85"] Feb 27 18:29:08 crc kubenswrapper[4708]: I0227 18:29:08.733076 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zl85" event={"ID":"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1","Type":"ContainerStarted","Data":"1aeded76baf59ae72b086a4d1518f6dc18e73683ad2ec97c326762bfa311a175"} Feb 27 18:29:09 crc kubenswrapper[4708]: I0227 18:29:09.747701 4708 generic.go:334] "Generic (PLEG): container finished" podID="993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" containerID="4c467643749dac7491caa6c1987137570d0f60e3670c4fc2d36936da24aa4f88" exitCode=0 Feb 27 18:29:09 crc kubenswrapper[4708]: I0227 18:29:09.747903 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zl85" event={"ID":"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1","Type":"ContainerDied","Data":"4c467643749dac7491caa6c1987137570d0f60e3670c4fc2d36936da24aa4f88"} Feb 27 18:29:09 crc kubenswrapper[4708]: I0227 18:29:09.751024 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 18:29:10 crc kubenswrapper[4708]: I0227 18:29:10.761726 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zl85" event={"ID":"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1","Type":"ContainerStarted","Data":"5ea1539f12045a376c01505b0e58d0a58c80d8d0c5e680ec9970e4c34c0dd535"} Feb 27 18:29:12 crc kubenswrapper[4708]: I0227 18:29:12.786118 4708 generic.go:334] "Generic (PLEG): container finished" podID="993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" containerID="5ea1539f12045a376c01505b0e58d0a58c80d8d0c5e680ec9970e4c34c0dd535" exitCode=0 Feb 27 18:29:12 crc kubenswrapper[4708]: I0227 18:29:12.786167 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zl85" event={"ID":"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1","Type":"ContainerDied","Data":"5ea1539f12045a376c01505b0e58d0a58c80d8d0c5e680ec9970e4c34c0dd535"} Feb 27 18:29:13 crc kubenswrapper[4708]: I0227 18:29:13.803827 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zl85" event={"ID":"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1","Type":"ContainerStarted","Data":"38c53e94c7fb8707a55f9153ed8dce06179f4a6175bb47eb9988f81a77c4d23a"} Feb 27 18:29:13 crc kubenswrapper[4708]: I0227 
18:29:13.837893 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6zl85" podStartSLOduration=3.36121086 podStartE2EDuration="6.837866311s" podCreationTimestamp="2026-02-27 18:29:07 +0000 UTC" firstStartedPulling="2026-02-27 18:29:09.750567668 +0000 UTC m=+5748.266365285" lastFinishedPulling="2026-02-27 18:29:13.227223109 +0000 UTC m=+5751.743020736" observedRunningTime="2026-02-27 18:29:13.82331904 +0000 UTC m=+5752.339116667" watchObservedRunningTime="2026-02-27 18:29:13.837866311 +0000 UTC m=+5752.353663948" Feb 27 18:29:15 crc kubenswrapper[4708]: E0227 18:29:15.230639 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:29:16 crc kubenswrapper[4708]: E0227 18:29:16.230039 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:29:18 crc kubenswrapper[4708]: I0227 18:29:18.141649 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:18 crc kubenswrapper[4708]: I0227 18:29:18.141964 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:18 crc kubenswrapper[4708]: I0227 18:29:18.212827 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:18 crc kubenswrapper[4708]: I0227 18:29:18.943982 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:19 crc kubenswrapper[4708]: I0227 18:29:19.022330 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6zl85"] Feb 27 18:29:20 crc kubenswrapper[4708]: I0227 18:29:20.882187 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6zl85" podUID="993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" containerName="registry-server" containerID="cri-o://38c53e94c7fb8707a55f9153ed8dce06179f4a6175bb47eb9988f81a77c4d23a" gracePeriod=2 Feb 27 18:29:21 crc kubenswrapper[4708]: I0227 18:29:21.898786 4708 generic.go:334] "Generic (PLEG): container finished" podID="993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" containerID="38c53e94c7fb8707a55f9153ed8dce06179f4a6175bb47eb9988f81a77c4d23a" exitCode=0 Feb 27 18:29:21 crc kubenswrapper[4708]: I0227 18:29:21.898928 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zl85" event={"ID":"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1","Type":"ContainerDied","Data":"38c53e94c7fb8707a55f9153ed8dce06179f4a6175bb47eb9988f81a77c4d23a"} Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.153354 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.338235 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-utilities\") pod \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\" (UID: \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\") " Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.338522 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-catalog-content\") pod \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\" (UID: \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\") " Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.338641 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nx7sc\" (UniqueName: \"kubernetes.io/projected/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-kube-api-access-nx7sc\") pod \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\" (UID: \"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1\") " Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.340091 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-utilities" (OuterVolumeSpecName: "utilities") pod "993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" (UID: "993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.348273 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-kube-api-access-nx7sc" (OuterVolumeSpecName: "kube-api-access-nx7sc") pod "993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" (UID: "993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1"). InnerVolumeSpecName "kube-api-access-nx7sc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.405349 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" (UID: "993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.441644 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.441684 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.441701 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nx7sc\" (UniqueName: \"kubernetes.io/projected/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1-kube-api-access-nx7sc\") on node \"crc\" DevicePath \"\"" Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.916497 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6zl85" event={"ID":"993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1","Type":"ContainerDied","Data":"1aeded76baf59ae72b086a4d1518f6dc18e73683ad2ec97c326762bfa311a175"} Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.916569 4708 scope.go:117] "RemoveContainer" containerID="38c53e94c7fb8707a55f9153ed8dce06179f4a6175bb47eb9988f81a77c4d23a" Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.916615 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6zl85" Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.964670 4708 scope.go:117] "RemoveContainer" containerID="5ea1539f12045a376c01505b0e58d0a58c80d8d0c5e680ec9970e4c34c0dd535" Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.976054 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6zl85"] Feb 27 18:29:22 crc kubenswrapper[4708]: I0227 18:29:22.987179 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6zl85"] Feb 27 18:29:23 crc kubenswrapper[4708]: I0227 18:29:23.006749 4708 scope.go:117] "RemoveContainer" containerID="4c467643749dac7491caa6c1987137570d0f60e3670c4fc2d36936da24aa4f88" Feb 27 18:29:24 crc kubenswrapper[4708]: I0227 18:29:24.252036 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" path="/var/lib/kubelet/pods/993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1/volumes" Feb 27 18:29:27 crc kubenswrapper[4708]: E0227 18:29:27.232774 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:29:30 crc kubenswrapper[4708]: E0227 18:29:30.137516 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:29:30 crc kubenswrapper[4708]: E0227 18:29:30.138180 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:29:30 crc kubenswrapper[4708]: container 
&Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:29:30 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tb8pv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536928-k2dpc_openshift-infra(7be693cf-322d-4ac9-b66c-35a281510ef4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:29:30 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:29:30 crc kubenswrapper[4708]: E0227 18:29:30.139813 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:29:39 crc kubenswrapper[4708]: E0227 18:29:39.233034 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.284371 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-hqnb8"] Feb 27 18:29:40 crc kubenswrapper[4708]: E0227 18:29:40.285130 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" containerName="registry-server" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.285145 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" containerName="registry-server" Feb 27 18:29:40 crc kubenswrapper[4708]: E0227 18:29:40.285168 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" containerName="extract-utilities" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.285176 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" containerName="extract-utilities" Feb 27 18:29:40 crc kubenswrapper[4708]: E0227 18:29:40.285188 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" 
containerName="extract-content" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.285195 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" containerName="extract-content" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.285476 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="993da7ad-6c24-4bbe-acb9-7d3d4de5a3c1" containerName="registry-server" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.287453 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.307396 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hqnb8"] Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.382140 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gw8rd\" (UniqueName: \"kubernetes.io/projected/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-kube-api-access-gw8rd\") pod \"certified-operators-hqnb8\" (UID: \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\") " pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.382243 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-utilities\") pod \"certified-operators-hqnb8\" (UID: \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\") " pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.382487 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-catalog-content\") pod \"certified-operators-hqnb8\" (UID: \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\") " pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.485037 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gw8rd\" (UniqueName: \"kubernetes.io/projected/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-kube-api-access-gw8rd\") pod \"certified-operators-hqnb8\" (UID: \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\") " pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.485177 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-utilities\") pod \"certified-operators-hqnb8\" (UID: \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\") " pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.485430 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-catalog-content\") pod \"certified-operators-hqnb8\" (UID: \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\") " pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.485726 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-utilities\") pod \"certified-operators-hqnb8\" (UID: \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\") " 
pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.486213 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-catalog-content\") pod \"certified-operators-hqnb8\" (UID: \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\") " pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.510002 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gw8rd\" (UniqueName: \"kubernetes.io/projected/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-kube-api-access-gw8rd\") pod \"certified-operators-hqnb8\" (UID: \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\") " pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:40 crc kubenswrapper[4708]: I0227 18:29:40.624195 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:41 crc kubenswrapper[4708]: I0227 18:29:41.133733 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-hqnb8"] Feb 27 18:29:41 crc kubenswrapper[4708]: W0227 18:29:41.135724 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode4222c1c_d2d0_4901_b9b6_4b158e6aabe6.slice/crio-8590c71896132cfb45a73d24fa0ffd3921ead6765b8461957db3803140e5c1d5 WatchSource:0}: Error finding container 8590c71896132cfb45a73d24fa0ffd3921ead6765b8461957db3803140e5c1d5: Status 404 returned error can't find the container with id 8590c71896132cfb45a73d24fa0ffd3921ead6765b8461957db3803140e5c1d5 Feb 27 18:29:41 crc kubenswrapper[4708]: E0227 18:29:41.229104 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:29:42 crc kubenswrapper[4708]: I0227 18:29:42.141762 4708 generic.go:334] "Generic (PLEG): container finished" podID="e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" containerID="863dd1f97cd9f951c8bcdeeb5addb56a7c75d12ccfe8840752fc04163a41ef90" exitCode=0 Feb 27 18:29:42 crc kubenswrapper[4708]: I0227 18:29:42.141816 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqnb8" event={"ID":"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6","Type":"ContainerDied","Data":"863dd1f97cd9f951c8bcdeeb5addb56a7c75d12ccfe8840752fc04163a41ef90"} Feb 27 18:29:42 crc kubenswrapper[4708]: I0227 18:29:42.141898 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqnb8" event={"ID":"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6","Type":"ContainerStarted","Data":"8590c71896132cfb45a73d24fa0ffd3921ead6765b8461957db3803140e5c1d5"} Feb 27 18:29:44 crc kubenswrapper[4708]: I0227 18:29:44.167496 4708 generic.go:334] "Generic (PLEG): container finished" podID="e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" containerID="6965337233beb66c8fb3b4100659f9182b967a902a577dc3f229d243f43bf3a2" exitCode=0 Feb 27 18:29:44 crc kubenswrapper[4708]: I0227 18:29:44.167535 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqnb8" 
event={"ID":"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6","Type":"ContainerDied","Data":"6965337233beb66c8fb3b4100659f9182b967a902a577dc3f229d243f43bf3a2"} Feb 27 18:29:46 crc kubenswrapper[4708]: I0227 18:29:46.189574 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqnb8" event={"ID":"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6","Type":"ContainerStarted","Data":"439b542cb127ec8d7c601269970129e7740f69ebbd27bfcf921e377b5e412b2a"} Feb 27 18:29:46 crc kubenswrapper[4708]: I0227 18:29:46.219358 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-hqnb8" podStartSLOduration=3.333086169 podStartE2EDuration="6.21933702s" podCreationTimestamp="2026-02-27 18:29:40 +0000 UTC" firstStartedPulling="2026-02-27 18:29:42.14457026 +0000 UTC m=+5780.660367877" lastFinishedPulling="2026-02-27 18:29:45.030821141 +0000 UTC m=+5783.546618728" observedRunningTime="2026-02-27 18:29:46.20875265 +0000 UTC m=+5784.724550257" watchObservedRunningTime="2026-02-27 18:29:46.21933702 +0000 UTC m=+5784.735134627" Feb 27 18:29:50 crc kubenswrapper[4708]: I0227 18:29:50.626006 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:50 crc kubenswrapper[4708]: I0227 18:29:50.626528 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:50 crc kubenswrapper[4708]: I0227 18:29:50.700100 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:51 crc kubenswrapper[4708]: E0227 18:29:51.231494 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:29:51 crc kubenswrapper[4708]: I0227 18:29:51.324123 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:51 crc kubenswrapper[4708]: I0227 18:29:51.401497 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hqnb8"] Feb 27 18:29:53 crc kubenswrapper[4708]: E0227 18:29:53.232018 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:29:53 crc kubenswrapper[4708]: I0227 18:29:53.270659 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-hqnb8" podUID="e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" containerName="registry-server" containerID="cri-o://439b542cb127ec8d7c601269970129e7740f69ebbd27bfcf921e377b5e412b2a" gracePeriod=2 Feb 27 18:29:53 crc kubenswrapper[4708]: I0227 18:29:53.873840 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.022661 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gw8rd\" (UniqueName: \"kubernetes.io/projected/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-kube-api-access-gw8rd\") pod \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\" (UID: \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\") " Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.022830 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-utilities\") pod \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\" (UID: \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\") " Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.023009 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-catalog-content\") pod \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\" (UID: \"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6\") " Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.024493 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-utilities" (OuterVolumeSpecName: "utilities") pod "e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" (UID: "e4222c1c-d2d0-4901-b9b6-4b158e6aabe6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.032143 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-kube-api-access-gw8rd" (OuterVolumeSpecName: "kube-api-access-gw8rd") pod "e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" (UID: "e4222c1c-d2d0-4901-b9b6-4b158e6aabe6"). InnerVolumeSpecName "kube-api-access-gw8rd". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.082706 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" (UID: "e4222c1c-d2d0-4901-b9b6-4b158e6aabe6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.126307 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.126339 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gw8rd\" (UniqueName: \"kubernetes.io/projected/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-kube-api-access-gw8rd\") on node \"crc\" DevicePath \"\"" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.126351 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.285127 4708 generic.go:334] "Generic (PLEG): container finished" podID="e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" containerID="439b542cb127ec8d7c601269970129e7740f69ebbd27bfcf921e377b5e412b2a" exitCode=0 Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.285199 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqnb8" event={"ID":"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6","Type":"ContainerDied","Data":"439b542cb127ec8d7c601269970129e7740f69ebbd27bfcf921e377b5e412b2a"} Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.285243 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-hqnb8" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.285292 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-hqnb8" event={"ID":"e4222c1c-d2d0-4901-b9b6-4b158e6aabe6","Type":"ContainerDied","Data":"8590c71896132cfb45a73d24fa0ffd3921ead6765b8461957db3803140e5c1d5"} Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.285327 4708 scope.go:117] "RemoveContainer" containerID="439b542cb127ec8d7c601269970129e7740f69ebbd27bfcf921e377b5e412b2a" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.319996 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-hqnb8"] Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.328768 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-hqnb8"] Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.328933 4708 scope.go:117] "RemoveContainer" containerID="6965337233beb66c8fb3b4100659f9182b967a902a577dc3f229d243f43bf3a2" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.366411 4708 scope.go:117] "RemoveContainer" containerID="863dd1f97cd9f951c8bcdeeb5addb56a7c75d12ccfe8840752fc04163a41ef90" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.450624 4708 scope.go:117] "RemoveContainer" containerID="439b542cb127ec8d7c601269970129e7740f69ebbd27bfcf921e377b5e412b2a" Feb 27 18:29:54 crc kubenswrapper[4708]: E0227 18:29:54.451328 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"439b542cb127ec8d7c601269970129e7740f69ebbd27bfcf921e377b5e412b2a\": container with ID starting with 439b542cb127ec8d7c601269970129e7740f69ebbd27bfcf921e377b5e412b2a not found: ID does not exist" containerID="439b542cb127ec8d7c601269970129e7740f69ebbd27bfcf921e377b5e412b2a" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.451403 
4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"439b542cb127ec8d7c601269970129e7740f69ebbd27bfcf921e377b5e412b2a"} err="failed to get container status \"439b542cb127ec8d7c601269970129e7740f69ebbd27bfcf921e377b5e412b2a\": rpc error: code = NotFound desc = could not find container \"439b542cb127ec8d7c601269970129e7740f69ebbd27bfcf921e377b5e412b2a\": container with ID starting with 439b542cb127ec8d7c601269970129e7740f69ebbd27bfcf921e377b5e412b2a not found: ID does not exist" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.451451 4708 scope.go:117] "RemoveContainer" containerID="6965337233beb66c8fb3b4100659f9182b967a902a577dc3f229d243f43bf3a2" Feb 27 18:29:54 crc kubenswrapper[4708]: E0227 18:29:54.454360 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6965337233beb66c8fb3b4100659f9182b967a902a577dc3f229d243f43bf3a2\": container with ID starting with 6965337233beb66c8fb3b4100659f9182b967a902a577dc3f229d243f43bf3a2 not found: ID does not exist" containerID="6965337233beb66c8fb3b4100659f9182b967a902a577dc3f229d243f43bf3a2" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.454395 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6965337233beb66c8fb3b4100659f9182b967a902a577dc3f229d243f43bf3a2"} err="failed to get container status \"6965337233beb66c8fb3b4100659f9182b967a902a577dc3f229d243f43bf3a2\": rpc error: code = NotFound desc = could not find container \"6965337233beb66c8fb3b4100659f9182b967a902a577dc3f229d243f43bf3a2\": container with ID starting with 6965337233beb66c8fb3b4100659f9182b967a902a577dc3f229d243f43bf3a2 not found: ID does not exist" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.454421 4708 scope.go:117] "RemoveContainer" containerID="863dd1f97cd9f951c8bcdeeb5addb56a7c75d12ccfe8840752fc04163a41ef90" Feb 27 18:29:54 crc kubenswrapper[4708]: E0227 18:29:54.454843 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"863dd1f97cd9f951c8bcdeeb5addb56a7c75d12ccfe8840752fc04163a41ef90\": container with ID starting with 863dd1f97cd9f951c8bcdeeb5addb56a7c75d12ccfe8840752fc04163a41ef90 not found: ID does not exist" containerID="863dd1f97cd9f951c8bcdeeb5addb56a7c75d12ccfe8840752fc04163a41ef90" Feb 27 18:29:54 crc kubenswrapper[4708]: I0227 18:29:54.454900 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"863dd1f97cd9f951c8bcdeeb5addb56a7c75d12ccfe8840752fc04163a41ef90"} err="failed to get container status \"863dd1f97cd9f951c8bcdeeb5addb56a7c75d12ccfe8840752fc04163a41ef90\": rpc error: code = NotFound desc = could not find container \"863dd1f97cd9f951c8bcdeeb5addb56a7c75d12ccfe8840752fc04163a41ef90\": container with ID starting with 863dd1f97cd9f951c8bcdeeb5addb56a7c75d12ccfe8840752fc04163a41ef90 not found: ID does not exist" Feb 27 18:29:56 crc kubenswrapper[4708]: I0227 18:29:56.248984 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" path="/var/lib/kubelet/pods/e4222c1c-d2d0-4901-b9b6-4b158e6aabe6/volumes" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.173546 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536950-mpkfr"] Feb 27 18:30:00 crc kubenswrapper[4708]: E0227 18:30:00.175102 4708 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" containerName="extract-utilities" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.175126 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" containerName="extract-utilities" Feb 27 18:30:00 crc kubenswrapper[4708]: E0227 18:30:00.175155 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" containerName="extract-content" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.175170 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" containerName="extract-content" Feb 27 18:30:00 crc kubenswrapper[4708]: E0227 18:30:00.175232 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" containerName="registry-server" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.175243 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" containerName="registry-server" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.175571 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="e4222c1c-d2d0-4901-b9b6-4b158e6aabe6" containerName="registry-server" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.176715 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536950-mpkfr" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.199111 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536950-mpkfr"] Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.264181 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj"] Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.265895 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.268192 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.269031 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.277167 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkssn\" (UniqueName: \"kubernetes.io/projected/812d28f8-6380-4708-8aaf-cc2d7f91c736-kube-api-access-hkssn\") pod \"auto-csr-approver-29536950-mpkfr\" (UID: \"812d28f8-6380-4708-8aaf-cc2d7f91c736\") " pod="openshift-infra/auto-csr-approver-29536950-mpkfr" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.282913 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj"] Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.379391 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2923d922-34e8-425a-9e01-131e2863d638-config-volume\") pod \"collect-profiles-29536950-skxdj\" (UID: \"2923d922-34e8-425a-9e01-131e2863d638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.379600 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qr2z\" (UniqueName: \"kubernetes.io/projected/2923d922-34e8-425a-9e01-131e2863d638-kube-api-access-6qr2z\") pod \"collect-profiles-29536950-skxdj\" (UID: \"2923d922-34e8-425a-9e01-131e2863d638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.380098 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2923d922-34e8-425a-9e01-131e2863d638-secret-volume\") pod \"collect-profiles-29536950-skxdj\" (UID: \"2923d922-34e8-425a-9e01-131e2863d638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.380421 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkssn\" (UniqueName: \"kubernetes.io/projected/812d28f8-6380-4708-8aaf-cc2d7f91c736-kube-api-access-hkssn\") pod \"auto-csr-approver-29536950-mpkfr\" (UID: \"812d28f8-6380-4708-8aaf-cc2d7f91c736\") " pod="openshift-infra/auto-csr-approver-29536950-mpkfr" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.411297 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkssn\" (UniqueName: \"kubernetes.io/projected/812d28f8-6380-4708-8aaf-cc2d7f91c736-kube-api-access-hkssn\") pod \"auto-csr-approver-29536950-mpkfr\" (UID: \"812d28f8-6380-4708-8aaf-cc2d7f91c736\") " pod="openshift-infra/auto-csr-approver-29536950-mpkfr" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.483080 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2923d922-34e8-425a-9e01-131e2863d638-config-volume\") pod 
\"collect-profiles-29536950-skxdj\" (UID: \"2923d922-34e8-425a-9e01-131e2863d638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.483218 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qr2z\" (UniqueName: \"kubernetes.io/projected/2923d922-34e8-425a-9e01-131e2863d638-kube-api-access-6qr2z\") pod \"collect-profiles-29536950-skxdj\" (UID: \"2923d922-34e8-425a-9e01-131e2863d638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.483363 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2923d922-34e8-425a-9e01-131e2863d638-secret-volume\") pod \"collect-profiles-29536950-skxdj\" (UID: \"2923d922-34e8-425a-9e01-131e2863d638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.484802 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2923d922-34e8-425a-9e01-131e2863d638-config-volume\") pod \"collect-profiles-29536950-skxdj\" (UID: \"2923d922-34e8-425a-9e01-131e2863d638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.489174 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2923d922-34e8-425a-9e01-131e2863d638-secret-volume\") pod \"collect-profiles-29536950-skxdj\" (UID: \"2923d922-34e8-425a-9e01-131e2863d638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.513649 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qr2z\" (UniqueName: \"kubernetes.io/projected/2923d922-34e8-425a-9e01-131e2863d638-kube-api-access-6qr2z\") pod \"collect-profiles-29536950-skxdj\" (UID: \"2923d922-34e8-425a-9e01-131e2863d638\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.517406 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536950-mpkfr" Feb 27 18:30:00 crc kubenswrapper[4708]: I0227 18:30:00.587790 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" Feb 27 18:30:01 crc kubenswrapper[4708]: I0227 18:30:01.058656 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536950-mpkfr"] Feb 27 18:30:01 crc kubenswrapper[4708]: W0227 18:30:01.186390 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2923d922_34e8_425a_9e01_131e2863d638.slice/crio-97e908b9b941351ed892b1b3dc8010d71c7b781a28d74c87366a3b36f6b3b10f WatchSource:0}: Error finding container 97e908b9b941351ed892b1b3dc8010d71c7b781a28d74c87366a3b36f6b3b10f: Status 404 returned error can't find the container with id 97e908b9b941351ed892b1b3dc8010d71c7b781a28d74c87366a3b36f6b3b10f Feb 27 18:30:01 crc kubenswrapper[4708]: I0227 18:30:01.188347 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj"] Feb 27 18:30:01 crc kubenswrapper[4708]: I0227 18:30:01.366324 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536950-mpkfr" event={"ID":"812d28f8-6380-4708-8aaf-cc2d7f91c736","Type":"ContainerStarted","Data":"24cb0581841f939261f49bd45a1e013579e3173e9fe39081d03018b9231a1860"} Feb 27 18:30:01 crc kubenswrapper[4708]: I0227 18:30:01.367991 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" event={"ID":"2923d922-34e8-425a-9e01-131e2863d638","Type":"ContainerStarted","Data":"97e908b9b941351ed892b1b3dc8010d71c7b781a28d74c87366a3b36f6b3b10f"} Feb 27 18:30:02 crc kubenswrapper[4708]: I0227 18:30:02.379156 4708 generic.go:334] "Generic (PLEG): container finished" podID="2923d922-34e8-425a-9e01-131e2863d638" containerID="3205f02eed21b78710f4ae11cf95ebd3a93ac9253cd47f9536545ac8ba75b811" exitCode=0 Feb 27 18:30:02 crc kubenswrapper[4708]: I0227 18:30:02.379243 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" event={"ID":"2923d922-34e8-425a-9e01-131e2863d638","Type":"ContainerDied","Data":"3205f02eed21b78710f4ae11cf95ebd3a93ac9253cd47f9536545ac8ba75b811"} Feb 27 18:30:02 crc kubenswrapper[4708]: E0227 18:30:02.624295 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:30:02 crc kubenswrapper[4708]: E0227 18:30:02.624911 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:30:02 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:30:02 crc kubenswrapper[4708]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-hkssn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536950-mpkfr_openshift-infra(812d28f8-6380-4708-8aaf-cc2d7f91c736): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)
Feb 27 18:30:02 crc kubenswrapper[4708]: > logger="UnhandledError"
Feb 27 18:30:02 crc kubenswrapper[4708]: E0227 18:30:02.626551 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536950-mpkfr" podUID="812d28f8-6380-4708-8aaf-cc2d7f91c736"
Feb 27 18:30:03 crc kubenswrapper[4708]: E0227 18:30:03.393249 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536950-mpkfr" podUID="812d28f8-6380-4708-8aaf-cc2d7f91c736"
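The container spec dumped in the UnhandledError block above also records exactly what these auto-csr-approver jobs run: a single /bin/bash -c one-liner in an ose-cli image. The same command as logged, only reflowed and commented here:

    # From the logged Command:[/bin/bash -c ...]: print the name of every CSR
    # whose .status is still empty (neither approved nor denied yet)...
    oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
        | xargs --no-run-if-empty oc adm certificate approve
    # ...and approve them all; --no-run-if-empty keeps xargs from invoking
    # "oc adm certificate approve" with no arguments when nothing is pending.

Feb 27 18:30:03 crc kubenswrapper[4708]: I0227 18:30:03.909888 4708 util.go:48] "No ready sandbox for pod can be found. 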
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" Feb 27 18:30:04 crc kubenswrapper[4708]: I0227 18:30:04.070534 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2923d922-34e8-425a-9e01-131e2863d638-config-volume\") pod \"2923d922-34e8-425a-9e01-131e2863d638\" (UID: \"2923d922-34e8-425a-9e01-131e2863d638\") " Feb 27 18:30:04 crc kubenswrapper[4708]: I0227 18:30:04.070576 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6qr2z\" (UniqueName: \"kubernetes.io/projected/2923d922-34e8-425a-9e01-131e2863d638-kube-api-access-6qr2z\") pod \"2923d922-34e8-425a-9e01-131e2863d638\" (UID: \"2923d922-34e8-425a-9e01-131e2863d638\") " Feb 27 18:30:04 crc kubenswrapper[4708]: I0227 18:30:04.070621 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2923d922-34e8-425a-9e01-131e2863d638-secret-volume\") pod \"2923d922-34e8-425a-9e01-131e2863d638\" (UID: \"2923d922-34e8-425a-9e01-131e2863d638\") " Feb 27 18:30:04 crc kubenswrapper[4708]: I0227 18:30:04.071801 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2923d922-34e8-425a-9e01-131e2863d638-config-volume" (OuterVolumeSpecName: "config-volume") pod "2923d922-34e8-425a-9e01-131e2863d638" (UID: "2923d922-34e8-425a-9e01-131e2863d638"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 18:30:04 crc kubenswrapper[4708]: I0227 18:30:04.077362 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2923d922-34e8-425a-9e01-131e2863d638-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2923d922-34e8-425a-9e01-131e2863d638" (UID: "2923d922-34e8-425a-9e01-131e2863d638"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 18:30:04 crc kubenswrapper[4708]: I0227 18:30:04.077595 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2923d922-34e8-425a-9e01-131e2863d638-kube-api-access-6qr2z" (OuterVolumeSpecName: "kube-api-access-6qr2z") pod "2923d922-34e8-425a-9e01-131e2863d638" (UID: "2923d922-34e8-425a-9e01-131e2863d638"). InnerVolumeSpecName "kube-api-access-6qr2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:30:04 crc kubenswrapper[4708]: I0227 18:30:04.173466 4708 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2923d922-34e8-425a-9e01-131e2863d638-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 18:30:04 crc kubenswrapper[4708]: I0227 18:30:04.173502 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6qr2z\" (UniqueName: \"kubernetes.io/projected/2923d922-34e8-425a-9e01-131e2863d638-kube-api-access-6qr2z\") on node \"crc\" DevicePath \"\"" Feb 27 18:30:04 crc kubenswrapper[4708]: I0227 18:30:04.173516 4708 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2923d922-34e8-425a-9e01-131e2863d638-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 18:30:04 crc kubenswrapper[4708]: E0227 18:30:04.231030 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:30:04 crc kubenswrapper[4708]: I0227 18:30:04.402929 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" event={"ID":"2923d922-34e8-425a-9e01-131e2863d638","Type":"ContainerDied","Data":"97e908b9b941351ed892b1b3dc8010d71c7b781a28d74c87366a3b36f6b3b10f"} Feb 27 18:30:04 crc kubenswrapper[4708]: I0227 18:30:04.403016 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97e908b9b941351ed892b1b3dc8010d71c7b781a28d74c87366a3b36f6b3b10f" Feb 27 18:30:04 crc kubenswrapper[4708]: I0227 18:30:04.403028 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj" Feb 27 18:30:05 crc kubenswrapper[4708]: I0227 18:30:05.003432 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr"] Feb 27 18:30:05 crc kubenswrapper[4708]: I0227 18:30:05.020417 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536905-88wqr"] Feb 27 18:30:05 crc kubenswrapper[4708]: I0227 18:30:05.631870 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:30:05 crc kubenswrapper[4708]: I0227 18:30:05.631936 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:30:06 crc kubenswrapper[4708]: E0227 18:30:06.230179 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" Feb 27 18:30:06 crc kubenswrapper[4708]: I0227 18:30:06.240485 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5274b8a-9803-4096-a176-01de86c631f3" path="/var/lib/kubelet/pods/a5274b8a-9803-4096-a176-01de86c631f3/volumes" Feb 27 18:30:16 crc kubenswrapper[4708]: I0227 18:30:16.533054 4708 generic.go:334] "Generic (PLEG): container finished" podID="812d28f8-6380-4708-8aaf-cc2d7f91c736" containerID="dcb05190da20622f72d75973240430fc1e89dac32d274aca8ee7664f9691506e" exitCode=0 Feb 27 18:30:16 crc kubenswrapper[4708]: I0227 18:30:16.533096 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536950-mpkfr" event={"ID":"812d28f8-6380-4708-8aaf-cc2d7f91c736","Type":"ContainerDied","Data":"dcb05190da20622f72d75973240430fc1e89dac32d274aca8ee7664f9691506e"} Feb 27 18:30:17 crc kubenswrapper[4708]: I0227 18:30:17.959475 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536950-mpkfr" Feb 27 18:30:18 crc kubenswrapper[4708]: I0227 18:30:18.084623 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hkssn\" (UniqueName: \"kubernetes.io/projected/812d28f8-6380-4708-8aaf-cc2d7f91c736-kube-api-access-hkssn\") pod \"812d28f8-6380-4708-8aaf-cc2d7f91c736\" (UID: \"812d28f8-6380-4708-8aaf-cc2d7f91c736\") " Feb 27 18:30:18 crc kubenswrapper[4708]: I0227 18:30:18.090105 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/812d28f8-6380-4708-8aaf-cc2d7f91c736-kube-api-access-hkssn" (OuterVolumeSpecName: "kube-api-access-hkssn") pod "812d28f8-6380-4708-8aaf-cc2d7f91c736" (UID: "812d28f8-6380-4708-8aaf-cc2d7f91c736"). InnerVolumeSpecName "kube-api-access-hkssn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:30:18 crc kubenswrapper[4708]: I0227 18:30:18.186808 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hkssn\" (UniqueName: \"kubernetes.io/projected/812d28f8-6380-4708-8aaf-cc2d7f91c736-kube-api-access-hkssn\") on node \"crc\" DevicePath \"\"" Feb 27 18:30:18 crc kubenswrapper[4708]: I0227 18:30:18.558680 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536950-mpkfr" event={"ID":"812d28f8-6380-4708-8aaf-cc2d7f91c736","Type":"ContainerDied","Data":"24cb0581841f939261f49bd45a1e013579e3173e9fe39081d03018b9231a1860"} Feb 27 18:30:18 crc kubenswrapper[4708]: I0227 18:30:18.558735 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24cb0581841f939261f49bd45a1e013579e3173e9fe39081d03018b9231a1860" Feb 27 18:30:18 crc kubenswrapper[4708]: I0227 18:30:18.558746 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536950-mpkfr" Feb 27 18:30:19 crc kubenswrapper[4708]: I0227 18:30:19.057321 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536944-mc9xt"] Feb 27 18:30:19 crc kubenswrapper[4708]: I0227 18:30:19.076948 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536944-mc9xt"] Feb 27 18:30:19 crc kubenswrapper[4708]: E0227 18:30:19.230827 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:30:20 crc kubenswrapper[4708]: I0227 18:30:20.249335 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a060715-2648-4f1c-ab55-1633203a02c2" path="/var/lib/kubelet/pods/8a060715-2648-4f1c-ab55-1633203a02c2/volumes" Feb 27 18:30:22 crc kubenswrapper[4708]: I0227 18:30:22.620539 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" event={"ID":"b35a5adf-48a7-4e39-9491-c45f9b71b9b7","Type":"ContainerStarted","Data":"af88185df3e3c2aed3db820897f58690c860959615ea697c9b978cb3f02912ff"} Feb 27 18:30:22 crc kubenswrapper[4708]: I0227 18:30:22.641085 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" podStartSLOduration=1.5703093190000001 podStartE2EDuration="16m22.641057267s" podCreationTimestamp="2026-02-27 18:14:00 +0000 UTC" firstStartedPulling="2026-02-27 18:14:01.004040503 +0000 UTC m=+4839.519838130" lastFinishedPulling="2026-02-27 18:30:22.074788451 +0000 UTC m=+5820.590586078" observedRunningTime="2026-02-27 18:30:22.636281162 +0000 UTC m=+5821.152078769" watchObservedRunningTime="2026-02-27 18:30:22.641057267 +0000 UTC m=+5821.156854864" Feb 27 18:30:23 crc kubenswrapper[4708]: I0227 18:30:23.645137 4708 generic.go:334] "Generic (PLEG): container finished" podID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" containerID="af88185df3e3c2aed3db820897f58690c860959615ea697c9b978cb3f02912ff" exitCode=0 Feb 27 18:30:23 crc kubenswrapper[4708]: I0227 18:30:23.646897 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" 
event={"ID":"b35a5adf-48a7-4e39-9491-c45f9b71b9b7","Type":"ContainerDied","Data":"af88185df3e3c2aed3db820897f58690c860959615ea697c9b978cb3f02912ff"} Feb 27 18:30:25 crc kubenswrapper[4708]: I0227 18:30:25.180804 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" Feb 27 18:30:25 crc kubenswrapper[4708]: I0227 18:30:25.251667 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmkn7\" (UniqueName: \"kubernetes.io/projected/b35a5adf-48a7-4e39-9491-c45f9b71b9b7-kube-api-access-xmkn7\") pod \"b35a5adf-48a7-4e39-9491-c45f9b71b9b7\" (UID: \"b35a5adf-48a7-4e39-9491-c45f9b71b9b7\") " Feb 27 18:30:25 crc kubenswrapper[4708]: I0227 18:30:25.264915 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b35a5adf-48a7-4e39-9491-c45f9b71b9b7-kube-api-access-xmkn7" (OuterVolumeSpecName: "kube-api-access-xmkn7") pod "b35a5adf-48a7-4e39-9491-c45f9b71b9b7" (UID: "b35a5adf-48a7-4e39-9491-c45f9b71b9b7"). InnerVolumeSpecName "kube-api-access-xmkn7". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:30:25 crc kubenswrapper[4708]: I0227 18:30:25.354598 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmkn7\" (UniqueName: \"kubernetes.io/projected/b35a5adf-48a7-4e39-9491-c45f9b71b9b7-kube-api-access-xmkn7\") on node \"crc\" DevicePath \"\"" Feb 27 18:30:25 crc kubenswrapper[4708]: I0227 18:30:25.677018 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" event={"ID":"b35a5adf-48a7-4e39-9491-c45f9b71b9b7","Type":"ContainerDied","Data":"e53b1701631ba7af6f67f7d13168fecb912ef607a26a7cebe118618059cda574"} Feb 27 18:30:25 crc kubenswrapper[4708]: I0227 18:30:25.677060 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e53b1701631ba7af6f67f7d13168fecb912ef607a26a7cebe118618059cda574" Feb 27 18:30:25 crc kubenswrapper[4708]: I0227 18:30:25.677123 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536934-qjmvw" Feb 27 18:30:25 crc kubenswrapper[4708]: I0227 18:30:25.713876 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536934-qjmvw"] Feb 27 18:30:25 crc kubenswrapper[4708]: I0227 18:30:25.729003 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536934-qjmvw"] Feb 27 18:30:26 crc kubenswrapper[4708]: I0227 18:30:26.245957 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" path="/var/lib/kubelet/pods/b35a5adf-48a7-4e39-9491-c45f9b71b9b7/volumes" Feb 27 18:30:32 crc kubenswrapper[4708]: E0227 18:30:32.235794 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:30:35 crc kubenswrapper[4708]: I0227 18:30:35.631238 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:30:35 crc kubenswrapper[4708]: I0227 18:30:35.631572 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:30:43 crc kubenswrapper[4708]: E0227 18:30:43.231582 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:30:55 crc kubenswrapper[4708]: E0227 18:30:55.232036 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:31:05 crc kubenswrapper[4708]: I0227 18:31:05.631944 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:31:05 crc kubenswrapper[4708]: I0227 18:31:05.632523 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:31:05 crc kubenswrapper[4708]: I0227 18:31:05.632565 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 18:31:05 crc kubenswrapper[4708]: I0227 18:31:05.633505 4708 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:31:05 crc kubenswrapper[4708]: I0227 18:31:05.633573 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" gracePeriod=600 Feb 27 18:31:05 crc kubenswrapper[4708]: E0227 18:31:05.755345 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:31:05 crc kubenswrapper[4708]: I0227 18:31:05.778751 4708 scope.go:117] "RemoveContainer" containerID="5db0772007d50353bc7c4bf4e1949764322c23eade2a997d7df25912d81b26b3" Feb 27 18:31:05 crc kubenswrapper[4708]: I0227 18:31:05.854062 4708 scope.go:117] "RemoveContainer" containerID="f95088672443750716d6d84d43246519db12cd69eda5db689aa522b622b2fe7f" Feb 27 18:31:06 crc kubenswrapper[4708]: I0227 18:31:06.138271 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" exitCode=0 Feb 27 18:31:06 crc kubenswrapper[4708]: I0227 18:31:06.138335 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc"} Feb 27 18:31:06 crc kubenswrapper[4708]: I0227 18:31:06.138394 4708 scope.go:117] "RemoveContainer" containerID="bc88558550d87eae3a512b21fd1a12e6ff0ab0f0676c9f1b1877d03be6f078fe" Feb 27 18:31:06 crc kubenswrapper[4708]: I0227 18:31:06.139525 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:31:06 crc kubenswrapper[4708]: E0227 18:31:06.140077 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:31:10 crc kubenswrapper[4708]: E0227 18:31:10.233388 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:31:18 crc kubenswrapper[4708]: I0227 18:31:18.229218 4708 scope.go:117] "RemoveContainer" 
containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:31:18 crc kubenswrapper[4708]: E0227 18:31:18.230310 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:31:21 crc kubenswrapper[4708]: E0227 18:31:21.231605 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:31:31 crc kubenswrapper[4708]: I0227 18:31:31.229120 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:31:31 crc kubenswrapper[4708]: E0227 18:31:31.230261 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:31:34 crc kubenswrapper[4708]: E0227 18:31:34.232982 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:31:45 crc kubenswrapper[4708]: I0227 18:31:45.229255 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:31:45 crc kubenswrapper[4708]: E0227 18:31:45.230511 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:31:49 crc kubenswrapper[4708]: E0227 18:31:49.231967 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:32:00 crc kubenswrapper[4708]: I0227 18:32:00.166450 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536952-vhfxl"] Feb 27 18:32:00 crc kubenswrapper[4708]: E0227 18:32:00.167986 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" containerName="oc" Feb 27 18:32:00 crc kubenswrapper[4708]: I0227 18:32:00.168004 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" 
containerName="oc" Feb 27 18:32:00 crc kubenswrapper[4708]: E0227 18:32:00.168026 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="812d28f8-6380-4708-8aaf-cc2d7f91c736" containerName="oc" Feb 27 18:32:00 crc kubenswrapper[4708]: I0227 18:32:00.168032 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="812d28f8-6380-4708-8aaf-cc2d7f91c736" containerName="oc" Feb 27 18:32:00 crc kubenswrapper[4708]: E0227 18:32:00.168044 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2923d922-34e8-425a-9e01-131e2863d638" containerName="collect-profiles" Feb 27 18:32:00 crc kubenswrapper[4708]: I0227 18:32:00.168051 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="2923d922-34e8-425a-9e01-131e2863d638" containerName="collect-profiles" Feb 27 18:32:00 crc kubenswrapper[4708]: I0227 18:32:00.168305 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="2923d922-34e8-425a-9e01-131e2863d638" containerName="collect-profiles" Feb 27 18:32:00 crc kubenswrapper[4708]: I0227 18:32:00.168317 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="b35a5adf-48a7-4e39-9491-c45f9b71b9b7" containerName="oc" Feb 27 18:32:00 crc kubenswrapper[4708]: I0227 18:32:00.168330 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="812d28f8-6380-4708-8aaf-cc2d7f91c736" containerName="oc" Feb 27 18:32:00 crc kubenswrapper[4708]: I0227 18:32:00.169161 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536952-vhfxl" Feb 27 18:32:00 crc kubenswrapper[4708]: I0227 18:32:00.190469 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536952-vhfxl"] Feb 27 18:32:00 crc kubenswrapper[4708]: I0227 18:32:00.230746 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:32:00 crc kubenswrapper[4708]: E0227 18:32:00.231113 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:32:00 crc kubenswrapper[4708]: E0227 18:32:00.233822 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:32:00 crc kubenswrapper[4708]: I0227 18:32:00.365550 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pxrr\" (UniqueName: \"kubernetes.io/projected/dbff0a68-717b-4cac-a965-877ac0ba1767-kube-api-access-8pxrr\") pod \"auto-csr-approver-29536952-vhfxl\" (UID: \"dbff0a68-717b-4cac-a965-877ac0ba1767\") " pod="openshift-infra/auto-csr-approver-29536952-vhfxl" Feb 27 18:32:00 crc kubenswrapper[4708]: I0227 18:32:00.469442 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8pxrr\" (UniqueName: \"kubernetes.io/projected/dbff0a68-717b-4cac-a965-877ac0ba1767-kube-api-access-8pxrr\") pod \"auto-csr-approver-29536952-vhfxl\" (UID: 
\"dbff0a68-717b-4cac-a965-877ac0ba1767\") " pod="openshift-infra/auto-csr-approver-29536952-vhfxl" Feb 27 18:32:00 crc kubenswrapper[4708]: I0227 18:32:00.528986 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8pxrr\" (UniqueName: \"kubernetes.io/projected/dbff0a68-717b-4cac-a965-877ac0ba1767-kube-api-access-8pxrr\") pod \"auto-csr-approver-29536952-vhfxl\" (UID: \"dbff0a68-717b-4cac-a965-877ac0ba1767\") " pod="openshift-infra/auto-csr-approver-29536952-vhfxl" Feb 27 18:32:00 crc kubenswrapper[4708]: I0227 18:32:00.799300 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536952-vhfxl" Feb 27 18:32:01 crc kubenswrapper[4708]: I0227 18:32:01.303914 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536952-vhfxl"] Feb 27 18:32:01 crc kubenswrapper[4708]: I0227 18:32:01.784071 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536952-vhfxl" event={"ID":"dbff0a68-717b-4cac-a965-877ac0ba1767","Type":"ContainerStarted","Data":"16528c8b1f276bbc7706869fdabbd54910d89d037a77291433f2c0cf8669ef02"} Feb 27 18:32:02 crc kubenswrapper[4708]: E0227 18:32:02.258838 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 18:32:02 crc kubenswrapper[4708]: E0227 18:32:02.259229 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 18:32:02 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 18:32:02 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8pxrr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29536952-vhfxl_openshift-infra(dbff0a68-717b-4cac-a965-877ac0ba1767): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 18:32:02 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 18:32:02 crc kubenswrapper[4708]: E0227 18:32:02.264448 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: 
reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29536952-vhfxl" podUID="dbff0a68-717b-4cac-a965-877ac0ba1767"
Feb 27 18:32:02 crc kubenswrapper[4708]: E0227 18:32:02.799287 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536952-vhfxl" podUID="dbff0a68-717b-4cac-a965-877ac0ba1767"
Feb 27 18:32:15 crc kubenswrapper[4708]: I0227 18:32:15.229366 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc"
Feb 27 18:32:15 crc kubenswrapper[4708]: E0227 18:32:15.230447 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 18:32:15 crc kubenswrapper[4708]: E0227 18:32:15.234000 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:32:27 crc kubenswrapper[4708]: E0227 18:32:27.232690 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:32:28 crc kubenswrapper[4708]: I0227 18:32:28.229920 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc"
Feb 27 18:32:28 crc kubenswrapper[4708]: E0227 18:32:28.230826 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 18:32:39 crc kubenswrapper[4708]: I0227 18:32:39.229092 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc"
Feb 27 18:32:39 crc kubenswrapper[4708]: E0227 18:32:39.230380 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
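This second ErrImagePull fails exactly like the one at 18:30:02: CRI-O resolves the manifest list but receives HTTP 500 from the registry's sigstore endpoint while reading signatures, so the fault is on the registry side rather than the node. A diagnostic sketch, assuming network access from the node and, for podman, valid registry credentials; the URL is copied from the error above:

    # Probe the exact signature URL from the error; repeated 500s implicate the registry:
    curl -sI "https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7"
    # Retry the pull outside the kubelet's back-off to see whether the failure is transient:
    podman pull registry.redhat.io/openshift4/ose-cli:latest

Consistent with a transient server-side error, the next entries show the pull finally succeeding and the container starting at 18:32:40.

Feb 27 18:32:40 crc kubenswrapper[4708]: I0227 18:32:40.253726 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 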
pod="openshift-infra/auto-csr-approver-29536952-vhfxl" event={"ID":"dbff0a68-717b-4cac-a965-877ac0ba1767","Type":"ContainerStarted","Data":"902e8f51f1b155399d3a2a8d58cb27863f638772cfe18a4137c9a1f5e6a0dae9"} Feb 27 18:32:40 crc kubenswrapper[4708]: I0227 18:32:40.270505 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536952-vhfxl" podStartSLOduration=1.738030788 podStartE2EDuration="40.270483065s" podCreationTimestamp="2026-02-27 18:32:00 +0000 UTC" firstStartedPulling="2026-02-27 18:32:01.307929879 +0000 UTC m=+5919.823727466" lastFinishedPulling="2026-02-27 18:32:39.840382116 +0000 UTC m=+5958.356179743" observedRunningTime="2026-02-27 18:32:40.266491332 +0000 UTC m=+5958.782288919" watchObservedRunningTime="2026-02-27 18:32:40.270483065 +0000 UTC m=+5958.786280652" Feb 27 18:32:41 crc kubenswrapper[4708]: E0227 18:32:41.230060 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:32:41 crc kubenswrapper[4708]: I0227 18:32:41.265960 4708 generic.go:334] "Generic (PLEG): container finished" podID="dbff0a68-717b-4cac-a965-877ac0ba1767" containerID="902e8f51f1b155399d3a2a8d58cb27863f638772cfe18a4137c9a1f5e6a0dae9" exitCode=0 Feb 27 18:32:41 crc kubenswrapper[4708]: I0227 18:32:41.266028 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536952-vhfxl" event={"ID":"dbff0a68-717b-4cac-a965-877ac0ba1767","Type":"ContainerDied","Data":"902e8f51f1b155399d3a2a8d58cb27863f638772cfe18a4137c9a1f5e6a0dae9"} Feb 27 18:32:42 crc kubenswrapper[4708]: I0227 18:32:42.787614 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536952-vhfxl" Feb 27 18:32:42 crc kubenswrapper[4708]: I0227 18:32:42.878010 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pxrr\" (UniqueName: \"kubernetes.io/projected/dbff0a68-717b-4cac-a965-877ac0ba1767-kube-api-access-8pxrr\") pod \"dbff0a68-717b-4cac-a965-877ac0ba1767\" (UID: \"dbff0a68-717b-4cac-a965-877ac0ba1767\") " Feb 27 18:32:42 crc kubenswrapper[4708]: I0227 18:32:42.886336 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbff0a68-717b-4cac-a965-877ac0ba1767-kube-api-access-8pxrr" (OuterVolumeSpecName: "kube-api-access-8pxrr") pod "dbff0a68-717b-4cac-a965-877ac0ba1767" (UID: "dbff0a68-717b-4cac-a965-877ac0ba1767"). InnerVolumeSpecName "kube-api-access-8pxrr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:32:42 crc kubenswrapper[4708]: I0227 18:32:42.981298 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8pxrr\" (UniqueName: \"kubernetes.io/projected/dbff0a68-717b-4cac-a965-877ac0ba1767-kube-api-access-8pxrr\") on node \"crc\" DevicePath \"\"" Feb 27 18:32:43 crc kubenswrapper[4708]: I0227 18:32:43.289367 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536952-vhfxl" event={"ID":"dbff0a68-717b-4cac-a965-877ac0ba1767","Type":"ContainerDied","Data":"16528c8b1f276bbc7706869fdabbd54910d89d037a77291433f2c0cf8669ef02"} Feb 27 18:32:43 crc kubenswrapper[4708]: I0227 18:32:43.289429 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16528c8b1f276bbc7706869fdabbd54910d89d037a77291433f2c0cf8669ef02" Feb 27 18:32:43 crc kubenswrapper[4708]: I0227 18:32:43.289448 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536952-vhfxl" Feb 27 18:32:43 crc kubenswrapper[4708]: I0227 18:32:43.361716 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536946-t9np9"] Feb 27 18:32:43 crc kubenswrapper[4708]: I0227 18:32:43.372892 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536946-t9np9"] Feb 27 18:32:44 crc kubenswrapper[4708]: I0227 18:32:44.246528 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f53f4f25-29d2-43e2-b655-7389d6656a4e" path="/var/lib/kubelet/pods/f53f4f25-29d2-43e2-b655-7389d6656a4e/volumes" Feb 27 18:32:55 crc kubenswrapper[4708]: I0227 18:32:55.228636 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:32:55 crc kubenswrapper[4708]: E0227 18:32:55.229456 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:32:56 crc kubenswrapper[4708]: E0227 18:32:56.232432 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:33:05 crc kubenswrapper[4708]: I0227 18:33:05.959146 4708 scope.go:117] "RemoveContainer" containerID="42b7495c4aba69fe83dcce51d43616a668753410bf771f12d1dca18f56114285" Feb 27 18:33:08 crc kubenswrapper[4708]: I0227 18:33:08.228734 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:33:08 crc kubenswrapper[4708]: E0227 18:33:08.229888 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" 
podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:33:10 crc kubenswrapper[4708]: E0227 18:33:10.233980 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:33:19 crc kubenswrapper[4708]: I0227 18:33:19.231613 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:33:19 crc kubenswrapper[4708]: E0227 18:33:19.232792 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:33:21 crc kubenswrapper[4708]: E0227 18:33:21.230386 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:33:32 crc kubenswrapper[4708]: I0227 18:33:32.241623 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:33:32 crc kubenswrapper[4708]: E0227 18:33:32.242907 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:33:32 crc kubenswrapper[4708]: E0227 18:33:32.245299 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:33:43 crc kubenswrapper[4708]: E0227 18:33:43.238310 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:33:44 crc kubenswrapper[4708]: I0227 18:33:44.230406 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:33:44 crc kubenswrapper[4708]: E0227 18:33:44.230833 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:33:56 
crc kubenswrapper[4708]: E0227 18:33:56.230264 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4"
Feb 27 18:33:59 crc kubenswrapper[4708]: I0227 18:33:59.229659 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc"
Feb 27 18:33:59 crc kubenswrapper[4708]: E0227 18:33:59.230452 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 18:34:00 crc kubenswrapper[4708]: I0227 18:34:00.161693 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536954-r8xf4"]
Feb 27 18:34:00 crc kubenswrapper[4708]: E0227 18:34:00.162592 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbff0a68-717b-4cac-a965-877ac0ba1767" containerName="oc"
Feb 27 18:34:00 crc kubenswrapper[4708]: I0227 18:34:00.162609 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbff0a68-717b-4cac-a965-877ac0ba1767" containerName="oc"
Feb 27 18:34:00 crc kubenswrapper[4708]: I0227 18:34:00.162914 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbff0a68-717b-4cac-a965-877ac0ba1767" containerName="oc"
Feb 27 18:34:00 crc kubenswrapper[4708]: I0227 18:34:00.163752 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536954-r8xf4"
Feb 27 18:34:00 crc kubenswrapper[4708]: I0227 18:34:00.186304 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536954-r8xf4"]
Feb 27 18:34:00 crc kubenswrapper[4708]: I0227 18:34:00.316791 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c42sl\" (UniqueName: \"kubernetes.io/projected/23389bc5-d111-440f-ba89-725fe3946947-kube-api-access-c42sl\") pod \"auto-csr-approver-29536954-r8xf4\" (UID: \"23389bc5-d111-440f-ba89-725fe3946947\") " pod="openshift-infra/auto-csr-approver-29536954-r8xf4"
Feb 27 18:34:00 crc kubenswrapper[4708]: I0227 18:34:00.419430 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c42sl\" (UniqueName: \"kubernetes.io/projected/23389bc5-d111-440f-ba89-725fe3946947-kube-api-access-c42sl\") pod \"auto-csr-approver-29536954-r8xf4\" (UID: \"23389bc5-d111-440f-ba89-725fe3946947\") " pod="openshift-infra/auto-csr-approver-29536954-r8xf4"
Feb 27 18:34:00 crc kubenswrapper[4708]: I0227 18:34:00.452772 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c42sl\" (UniqueName: \"kubernetes.io/projected/23389bc5-d111-440f-ba89-725fe3946947-kube-api-access-c42sl\") pod \"auto-csr-approver-29536954-r8xf4\" (UID: \"23389bc5-d111-440f-ba89-725fe3946947\") " pod="openshift-infra/auto-csr-approver-29536954-r8xf4"
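The -29536954 suffix on the new job is not random: the CronJob controller names each job after its scheduled time in minutes since the Unix epoch, which is why the suffix advances by 2 across these every-two-minutes runs (29536950 at 18:30, 29536952 at 18:32, 29536954 at 18:34). A quick decode, assuming GNU date:

    # Convert a CronJob job suffix (minutes since the epoch) back to a timestamp:
    date -u -d @$((29536954 * 60))
    # Fri Feb 27 18:34:00 UTC 2026 -- matching the SyncLoop ADD entry above

Feb 27 18:34:00 crc kubenswrapper[4708]: I0227 18:34:00.497705 4708 util.go:30] "No sandbox for pod can be found. 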
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536954-r8xf4" Feb 27 18:34:01 crc kubenswrapper[4708]: I0227 18:34:01.069311 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536954-r8xf4"] Feb 27 18:34:01 crc kubenswrapper[4708]: I0227 18:34:01.330174 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536954-r8xf4" event={"ID":"23389bc5-d111-440f-ba89-725fe3946947","Type":"ContainerStarted","Data":"5d095fa864a7b56ccb53f5d347546225a765fcd03a8d68590f3fb974561d4355"} Feb 27 18:34:03 crc kubenswrapper[4708]: I0227 18:34:03.358726 4708 generic.go:334] "Generic (PLEG): container finished" podID="23389bc5-d111-440f-ba89-725fe3946947" containerID="e28f2293e155d0e0293cf9150861ccb4bc4029e6c83e6773bba46c098d5efe3d" exitCode=0 Feb 27 18:34:03 crc kubenswrapper[4708]: I0227 18:34:03.358899 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536954-r8xf4" event={"ID":"23389bc5-d111-440f-ba89-725fe3946947","Type":"ContainerDied","Data":"e28f2293e155d0e0293cf9150861ccb4bc4029e6c83e6773bba46c098d5efe3d"} Feb 27 18:34:04 crc kubenswrapper[4708]: I0227 18:34:04.902729 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536954-r8xf4" Feb 27 18:34:05 crc kubenswrapper[4708]: I0227 18:34:05.065757 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c42sl\" (UniqueName: \"kubernetes.io/projected/23389bc5-d111-440f-ba89-725fe3946947-kube-api-access-c42sl\") pod \"23389bc5-d111-440f-ba89-725fe3946947\" (UID: \"23389bc5-d111-440f-ba89-725fe3946947\") " Feb 27 18:34:05 crc kubenswrapper[4708]: I0227 18:34:05.076945 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23389bc5-d111-440f-ba89-725fe3946947-kube-api-access-c42sl" (OuterVolumeSpecName: "kube-api-access-c42sl") pod "23389bc5-d111-440f-ba89-725fe3946947" (UID: "23389bc5-d111-440f-ba89-725fe3946947"). InnerVolumeSpecName "kube-api-access-c42sl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:34:05 crc kubenswrapper[4708]: I0227 18:34:05.169877 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c42sl\" (UniqueName: \"kubernetes.io/projected/23389bc5-d111-440f-ba89-725fe3946947-kube-api-access-c42sl\") on node \"crc\" DevicePath \"\"" Feb 27 18:34:05 crc kubenswrapper[4708]: I0227 18:34:05.390815 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536954-r8xf4" event={"ID":"23389bc5-d111-440f-ba89-725fe3946947","Type":"ContainerDied","Data":"5d095fa864a7b56ccb53f5d347546225a765fcd03a8d68590f3fb974561d4355"} Feb 27 18:34:05 crc kubenswrapper[4708]: I0227 18:34:05.390924 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536954-r8xf4" Feb 27 18:34:05 crc kubenswrapper[4708]: I0227 18:34:05.390928 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d095fa864a7b56ccb53f5d347546225a765fcd03a8d68590f3fb974561d4355" Feb 27 18:34:06 crc kubenswrapper[4708]: I0227 18:34:06.009593 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536948-fq4r9"] Feb 27 18:34:06 crc kubenswrapper[4708]: I0227 18:34:06.019255 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536948-fq4r9"] Feb 27 18:34:06 crc kubenswrapper[4708]: I0227 18:34:06.048443 4708 scope.go:117] "RemoveContainer" containerID="b3a6d6de26d2299836a77d8474214051d20d3a4b3f02f5b60369a23fbbfd16c7" Feb 27 18:34:06 crc kubenswrapper[4708]: I0227 18:34:06.246545 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e786fec6-0250-4b8d-8a37-63395236230b" path="/var/lib/kubelet/pods/e786fec6-0250-4b8d-8a37-63395236230b/volumes" Feb 27 18:34:10 crc kubenswrapper[4708]: E0227 18:34:10.234122 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:34:11 crc kubenswrapper[4708]: I0227 18:34:11.229792 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:34:11 crc kubenswrapper[4708]: E0227 18:34:11.230792 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:34:22 crc kubenswrapper[4708]: E0227 18:34:22.246483 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" Feb 27 18:34:24 crc kubenswrapper[4708]: I0227 18:34:24.229364 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:34:24 crc kubenswrapper[4708]: E0227 18:34:24.230227 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:34:36 crc kubenswrapper[4708]: I0227 18:34:36.230313 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:34:36 crc kubenswrapper[4708]: E0227 18:34:36.231438 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
Feb 27 18:34:36 crc kubenswrapper[4708]: E0227 18:34:36.231438 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 18:34:36 crc kubenswrapper[4708]: I0227 18:34:36.233609 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Feb 27 18:34:37 crc kubenswrapper[4708]: I0227 18:34:37.874959 4708 generic.go:334] "Generic (PLEG): container finished" podID="7be693cf-322d-4ac9-b66c-35a281510ef4" containerID="137aaff0b8cbf1d884028fdabaa9794a49a81c177b74175e2d479df8c3693455" exitCode=0
Feb 27 18:34:37 crc kubenswrapper[4708]: I0227 18:34:37.875064 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" event={"ID":"7be693cf-322d-4ac9-b66c-35a281510ef4","Type":"ContainerDied","Data":"137aaff0b8cbf1d884028fdabaa9794a49a81c177b74175e2d479df8c3693455"}
Feb 27 18:34:39 crc kubenswrapper[4708]: I0227 18:34:39.373521 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536928-k2dpc"
Feb 27 18:34:39 crc kubenswrapper[4708]: I0227 18:34:39.418973 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb8pv\" (UniqueName: \"kubernetes.io/projected/7be693cf-322d-4ac9-b66c-35a281510ef4-kube-api-access-tb8pv\") pod \"7be693cf-322d-4ac9-b66c-35a281510ef4\" (UID: \"7be693cf-322d-4ac9-b66c-35a281510ef4\") "
Feb 27 18:34:39 crc kubenswrapper[4708]: I0227 18:34:39.427525 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7be693cf-322d-4ac9-b66c-35a281510ef4-kube-api-access-tb8pv" (OuterVolumeSpecName: "kube-api-access-tb8pv") pod "7be693cf-322d-4ac9-b66c-35a281510ef4" (UID: "7be693cf-322d-4ac9-b66c-35a281510ef4"). InnerVolumeSpecName "kube-api-access-tb8pv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 18:34:39 crc kubenswrapper[4708]: I0227 18:34:39.522146 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb8pv\" (UniqueName: \"kubernetes.io/projected/7be693cf-322d-4ac9-b66c-35a281510ef4-kube-api-access-tb8pv\") on node \"crc\" DevicePath \"\""
Feb 27 18:34:39 crc kubenswrapper[4708]: I0227 18:34:39.901768 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" event={"ID":"7be693cf-322d-4ac9-b66c-35a281510ef4","Type":"ContainerDied","Data":"f170b41141676aa2cfd601797bece4bc18a7259afb9806d614ad9ef5fb551ade"}
Feb 27 18:34:39 crc kubenswrapper[4708]: I0227 18:34:39.901832 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f170b41141676aa2cfd601797bece4bc18a7259afb9806d614ad9ef5fb551ade"
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536928-k2dpc" Feb 27 18:34:40 crc kubenswrapper[4708]: I0227 18:34:40.463253 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536928-k2dpc"] Feb 27 18:34:40 crc kubenswrapper[4708]: I0227 18:34:40.475807 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536928-k2dpc"] Feb 27 18:34:42 crc kubenswrapper[4708]: I0227 18:34:42.248838 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" path="/var/lib/kubelet/pods/7be693cf-322d-4ac9-b66c-35a281510ef4/volumes" Feb 27 18:34:45 crc kubenswrapper[4708]: I0227 18:34:45.818400 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-svp7k"] Feb 27 18:34:45 crc kubenswrapper[4708]: E0227 18:34:45.820067 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23389bc5-d111-440f-ba89-725fe3946947" containerName="oc" Feb 27 18:34:45 crc kubenswrapper[4708]: I0227 18:34:45.820154 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="23389bc5-d111-440f-ba89-725fe3946947" containerName="oc" Feb 27 18:34:45 crc kubenswrapper[4708]: E0227 18:34:45.820214 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" containerName="oc" Feb 27 18:34:45 crc kubenswrapper[4708]: I0227 18:34:45.820229 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" containerName="oc" Feb 27 18:34:45 crc kubenswrapper[4708]: I0227 18:34:45.820618 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7be693cf-322d-4ac9-b66c-35a281510ef4" containerName="oc" Feb 27 18:34:45 crc kubenswrapper[4708]: I0227 18:34:45.820702 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="23389bc5-d111-440f-ba89-725fe3946947" containerName="oc" Feb 27 18:34:45 crc kubenswrapper[4708]: I0227 18:34:45.823684 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-svp7k" Feb 27 18:34:45 crc kubenswrapper[4708]: I0227 18:34:45.843638 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-svp7k"] Feb 27 18:34:45 crc kubenswrapper[4708]: I0227 18:34:45.983748 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-utilities\") pod \"redhat-operators-svp7k\" (UID: \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\") " pod="openshift-marketplace/redhat-operators-svp7k" Feb 27 18:34:45 crc kubenswrapper[4708]: I0227 18:34:45.984000 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n59rn\" (UniqueName: \"kubernetes.io/projected/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-kube-api-access-n59rn\") pod \"redhat-operators-svp7k\" (UID: \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\") " pod="openshift-marketplace/redhat-operators-svp7k" Feb 27 18:34:45 crc kubenswrapper[4708]: I0227 18:34:45.984154 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-catalog-content\") pod \"redhat-operators-svp7k\" (UID: \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\") " pod="openshift-marketplace/redhat-operators-svp7k" Feb 27 18:34:46 crc kubenswrapper[4708]: I0227 18:34:46.085695 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-utilities\") pod \"redhat-operators-svp7k\" (UID: \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\") " pod="openshift-marketplace/redhat-operators-svp7k" Feb 27 18:34:46 crc kubenswrapper[4708]: I0227 18:34:46.085807 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n59rn\" (UniqueName: \"kubernetes.io/projected/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-kube-api-access-n59rn\") pod \"redhat-operators-svp7k\" (UID: \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\") " pod="openshift-marketplace/redhat-operators-svp7k" Feb 27 18:34:46 crc kubenswrapper[4708]: I0227 18:34:46.085859 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-catalog-content\") pod \"redhat-operators-svp7k\" (UID: \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\") " pod="openshift-marketplace/redhat-operators-svp7k" Feb 27 18:34:46 crc kubenswrapper[4708]: I0227 18:34:46.086195 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-utilities\") pod \"redhat-operators-svp7k\" (UID: \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\") " pod="openshift-marketplace/redhat-operators-svp7k" Feb 27 18:34:46 crc kubenswrapper[4708]: I0227 18:34:46.086257 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-catalog-content\") pod \"redhat-operators-svp7k\" (UID: \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\") " pod="openshift-marketplace/redhat-operators-svp7k" Feb 27 18:34:46 crc kubenswrapper[4708]: I0227 18:34:46.112185 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-n59rn\" (UniqueName: \"kubernetes.io/projected/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-kube-api-access-n59rn\") pod \"redhat-operators-svp7k\" (UID: \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\") " pod="openshift-marketplace/redhat-operators-svp7k" Feb 27 18:34:46 crc kubenswrapper[4708]: I0227 18:34:46.160278 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-svp7k" Feb 27 18:34:46 crc kubenswrapper[4708]: I0227 18:34:46.627552 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-svp7k"] Feb 27 18:34:46 crc kubenswrapper[4708]: I0227 18:34:46.990767 4708 generic.go:334] "Generic (PLEG): container finished" podID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" containerID="32c0c6a13d9cebc18a99fb0c186d35866661c663a2e76cff249a647d1c1c4cba" exitCode=0 Feb 27 18:34:46 crc kubenswrapper[4708]: I0227 18:34:46.990817 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svp7k" event={"ID":"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe","Type":"ContainerDied","Data":"32c0c6a13d9cebc18a99fb0c186d35866661c663a2e76cff249a647d1c1c4cba"} Feb 27 18:34:46 crc kubenswrapper[4708]: I0227 18:34:46.990864 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svp7k" event={"ID":"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe","Type":"ContainerStarted","Data":"2f12e05e41881ae1058e7115168cb7dc30f03da066d220b387bbfda7ffd87f4c"} Feb 27 18:34:47 crc kubenswrapper[4708]: I0227 18:34:47.604255 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g4p94"] Feb 27 18:34:47 crc kubenswrapper[4708]: I0227 18:34:47.606580 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:47 crc kubenswrapper[4708]: I0227 18:34:47.640068 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4p94"] Feb 27 18:34:47 crc kubenswrapper[4708]: I0227 18:34:47.722338 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd680ab-5ac1-4191-b892-05dcdae323b1-utilities\") pod \"redhat-marketplace-g4p94\" (UID: \"7dd680ab-5ac1-4191-b892-05dcdae323b1\") " pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:47 crc kubenswrapper[4708]: I0227 18:34:47.722424 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9zx5\" (UniqueName: \"kubernetes.io/projected/7dd680ab-5ac1-4191-b892-05dcdae323b1-kube-api-access-j9zx5\") pod \"redhat-marketplace-g4p94\" (UID: \"7dd680ab-5ac1-4191-b892-05dcdae323b1\") " pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:47 crc kubenswrapper[4708]: I0227 18:34:47.722569 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd680ab-5ac1-4191-b892-05dcdae323b1-catalog-content\") pod \"redhat-marketplace-g4p94\" (UID: \"7dd680ab-5ac1-4191-b892-05dcdae323b1\") " pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:47 crc kubenswrapper[4708]: I0227 18:34:47.824791 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd680ab-5ac1-4191-b892-05dcdae323b1-catalog-content\") pod \"redhat-marketplace-g4p94\" (UID: \"7dd680ab-5ac1-4191-b892-05dcdae323b1\") " pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:47 crc kubenswrapper[4708]: I0227 18:34:47.824940 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd680ab-5ac1-4191-b892-05dcdae323b1-utilities\") pod \"redhat-marketplace-g4p94\" (UID: \"7dd680ab-5ac1-4191-b892-05dcdae323b1\") " pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:47 crc kubenswrapper[4708]: I0227 18:34:47.824981 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j9zx5\" (UniqueName: \"kubernetes.io/projected/7dd680ab-5ac1-4191-b892-05dcdae323b1-kube-api-access-j9zx5\") pod \"redhat-marketplace-g4p94\" (UID: \"7dd680ab-5ac1-4191-b892-05dcdae323b1\") " pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:47 crc kubenswrapper[4708]: I0227 18:34:47.825261 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd680ab-5ac1-4191-b892-05dcdae323b1-catalog-content\") pod \"redhat-marketplace-g4p94\" (UID: \"7dd680ab-5ac1-4191-b892-05dcdae323b1\") " pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:47 crc kubenswrapper[4708]: I0227 18:34:47.825822 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd680ab-5ac1-4191-b892-05dcdae323b1-utilities\") pod \"redhat-marketplace-g4p94\" (UID: \"7dd680ab-5ac1-4191-b892-05dcdae323b1\") " pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:47 crc kubenswrapper[4708]: I0227 18:34:47.849172 4708 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-j9zx5\" (UniqueName: \"kubernetes.io/projected/7dd680ab-5ac1-4191-b892-05dcdae323b1-kube-api-access-j9zx5\") pod \"redhat-marketplace-g4p94\" (UID: \"7dd680ab-5ac1-4191-b892-05dcdae323b1\") " pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:47 crc kubenswrapper[4708]: I0227 18:34:47.929613 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:48 crc kubenswrapper[4708]: E0227 18:34:48.015756 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 18:34:48 crc kubenswrapper[4708]: E0227 18:34:48.015937 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n59rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-svp7k_openshift-marketplace(445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:34:48 crc kubenswrapper[4708]: E0227 18:34:48.017140 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" 
pod="openshift-marketplace/redhat-operators-svp7k" podUID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" Feb 27 18:34:48 crc kubenswrapper[4708]: I0227 18:34:48.388417 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4p94"] Feb 27 18:34:48 crc kubenswrapper[4708]: W0227 18:34:48.389163 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7dd680ab_5ac1_4191_b892_05dcdae323b1.slice/crio-64ab47f9f0014063b559291c5b9188bb10d27eaaeadd8b092858af95cdd893c1 WatchSource:0}: Error finding container 64ab47f9f0014063b559291c5b9188bb10d27eaaeadd8b092858af95cdd893c1: Status 404 returned error can't find the container with id 64ab47f9f0014063b559291c5b9188bb10d27eaaeadd8b092858af95cdd893c1 Feb 27 18:34:49 crc kubenswrapper[4708]: I0227 18:34:49.016656 4708 generic.go:334] "Generic (PLEG): container finished" podID="7dd680ab-5ac1-4191-b892-05dcdae323b1" containerID="be7df92e0d8e484ac56ddd09c01763673762020ca18be207009e773f99e77434" exitCode=0 Feb 27 18:34:49 crc kubenswrapper[4708]: I0227 18:34:49.016756 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4p94" event={"ID":"7dd680ab-5ac1-4191-b892-05dcdae323b1","Type":"ContainerDied","Data":"be7df92e0d8e484ac56ddd09c01763673762020ca18be207009e773f99e77434"} Feb 27 18:34:49 crc kubenswrapper[4708]: I0227 18:34:49.017177 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4p94" event={"ID":"7dd680ab-5ac1-4191-b892-05dcdae323b1","Type":"ContainerStarted","Data":"64ab47f9f0014063b559291c5b9188bb10d27eaaeadd8b092858af95cdd893c1"} Feb 27 18:34:49 crc kubenswrapper[4708]: E0227 18:34:49.020790 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-svp7k" podUID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" Feb 27 18:34:49 crc kubenswrapper[4708]: I0227 18:34:49.228477 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:34:49 crc kubenswrapper[4708]: E0227 18:34:49.228810 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:34:50 crc kubenswrapper[4708]: I0227 18:34:50.031469 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4p94" event={"ID":"7dd680ab-5ac1-4191-b892-05dcdae323b1","Type":"ContainerStarted","Data":"317ca35ba4006505b72887308efb78bd66b8c6623417ceb4c0ce080b4d6c89b3"} Feb 27 18:34:51 crc kubenswrapper[4708]: I0227 18:34:51.044029 4708 generic.go:334] "Generic (PLEG): container finished" podID="7dd680ab-5ac1-4191-b892-05dcdae323b1" containerID="317ca35ba4006505b72887308efb78bd66b8c6623417ceb4c0ce080b4d6c89b3" exitCode=0 Feb 27 18:34:51 crc kubenswrapper[4708]: I0227 18:34:51.044095 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4p94" 
event={"ID":"7dd680ab-5ac1-4191-b892-05dcdae323b1","Type":"ContainerDied","Data":"317ca35ba4006505b72887308efb78bd66b8c6623417ceb4c0ce080b4d6c89b3"} Feb 27 18:34:52 crc kubenswrapper[4708]: I0227 18:34:52.059385 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4p94" event={"ID":"7dd680ab-5ac1-4191-b892-05dcdae323b1","Type":"ContainerStarted","Data":"233bfcaa24cabb8759758e5aa1e5f309dfbd06e46ce640fca1ec1e4f5f688802"} Feb 27 18:34:52 crc kubenswrapper[4708]: I0227 18:34:52.103925 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g4p94" podStartSLOduration=2.621070498 podStartE2EDuration="5.103896054s" podCreationTimestamp="2026-02-27 18:34:47 +0000 UTC" firstStartedPulling="2026-02-27 18:34:49.01942679 +0000 UTC m=+6087.535224417" lastFinishedPulling="2026-02-27 18:34:51.502252366 +0000 UTC m=+6090.018049973" observedRunningTime="2026-02-27 18:34:52.087598353 +0000 UTC m=+6090.603396020" watchObservedRunningTime="2026-02-27 18:34:52.103896054 +0000 UTC m=+6090.619693681" Feb 27 18:34:57 crc kubenswrapper[4708]: I0227 18:34:57.930253 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:57 crc kubenswrapper[4708]: I0227 18:34:57.930817 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:57 crc kubenswrapper[4708]: I0227 18:34:57.992880 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:58 crc kubenswrapper[4708]: I0227 18:34:58.262861 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:34:58 crc kubenswrapper[4708]: I0227 18:34:58.311881 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4p94"] Feb 27 18:35:00 crc kubenswrapper[4708]: I0227 18:35:00.152527 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g4p94" podUID="7dd680ab-5ac1-4191-b892-05dcdae323b1" containerName="registry-server" containerID="cri-o://233bfcaa24cabb8759758e5aa1e5f309dfbd06e46ce640fca1ec1e4f5f688802" gracePeriod=2 Feb 27 18:35:00 crc kubenswrapper[4708]: I0227 18:35:00.229074 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:35:00 crc kubenswrapper[4708]: E0227 18:35:00.229551 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:35:00 crc kubenswrapper[4708]: I0227 18:35:00.730301 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4p94" Feb 27 18:35:00 crc kubenswrapper[4708]: I0227 18:35:00.838987 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9zx5\" (UniqueName: \"kubernetes.io/projected/7dd680ab-5ac1-4191-b892-05dcdae323b1-kube-api-access-j9zx5\") pod \"7dd680ab-5ac1-4191-b892-05dcdae323b1\" (UID: \"7dd680ab-5ac1-4191-b892-05dcdae323b1\") " Feb 27 18:35:00 crc kubenswrapper[4708]: I0227 18:35:00.839314 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd680ab-5ac1-4191-b892-05dcdae323b1-utilities\") pod \"7dd680ab-5ac1-4191-b892-05dcdae323b1\" (UID: \"7dd680ab-5ac1-4191-b892-05dcdae323b1\") " Feb 27 18:35:00 crc kubenswrapper[4708]: I0227 18:35:00.839399 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd680ab-5ac1-4191-b892-05dcdae323b1-catalog-content\") pod \"7dd680ab-5ac1-4191-b892-05dcdae323b1\" (UID: \"7dd680ab-5ac1-4191-b892-05dcdae323b1\") " Feb 27 18:35:00 crc kubenswrapper[4708]: I0227 18:35:00.840126 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dd680ab-5ac1-4191-b892-05dcdae323b1-utilities" (OuterVolumeSpecName: "utilities") pod "7dd680ab-5ac1-4191-b892-05dcdae323b1" (UID: "7dd680ab-5ac1-4191-b892-05dcdae323b1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:35:00 crc kubenswrapper[4708]: I0227 18:35:00.847295 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7dd680ab-5ac1-4191-b892-05dcdae323b1-kube-api-access-j9zx5" (OuterVolumeSpecName: "kube-api-access-j9zx5") pod "7dd680ab-5ac1-4191-b892-05dcdae323b1" (UID: "7dd680ab-5ac1-4191-b892-05dcdae323b1"). InnerVolumeSpecName "kube-api-access-j9zx5". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:35:00 crc kubenswrapper[4708]: I0227 18:35:00.874769 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7dd680ab-5ac1-4191-b892-05dcdae323b1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7dd680ab-5ac1-4191-b892-05dcdae323b1" (UID: "7dd680ab-5ac1-4191-b892-05dcdae323b1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:35:00 crc kubenswrapper[4708]: E0227 18:35:00.917122 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 18:35:00 crc kubenswrapper[4708]: E0227 18:35:00.917601 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n59rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-svp7k_openshift-marketplace(445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 18:35:00 crc kubenswrapper[4708]: E0227 18:35:00.918974 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-svp7k" podUID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" Feb 27 18:35:00 crc kubenswrapper[4708]: I0227 18:35:00.942082 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7dd680ab-5ac1-4191-b892-05dcdae323b1-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:35:00 crc kubenswrapper[4708]: I0227 18:35:00.942116 4708 
Feb 27 18:35:00 crc kubenswrapper[4708]: I0227 18:35:00.942116 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7dd680ab-5ac1-4191-b892-05dcdae323b1-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 18:35:00 crc kubenswrapper[4708]: I0227 18:35:00.942131 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j9zx5\" (UniqueName: \"kubernetes.io/projected/7dd680ab-5ac1-4191-b892-05dcdae323b1-kube-api-access-j9zx5\") on node \"crc\" DevicePath \"\""
Feb 27 18:35:01 crc kubenswrapper[4708]: I0227 18:35:01.167523 4708 generic.go:334] "Generic (PLEG): container finished" podID="7dd680ab-5ac1-4191-b892-05dcdae323b1" containerID="233bfcaa24cabb8759758e5aa1e5f309dfbd06e46ce640fca1ec1e4f5f688802" exitCode=0
Feb 27 18:35:01 crc kubenswrapper[4708]: I0227 18:35:01.167582 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4p94" event={"ID":"7dd680ab-5ac1-4191-b892-05dcdae323b1","Type":"ContainerDied","Data":"233bfcaa24cabb8759758e5aa1e5f309dfbd06e46ce640fca1ec1e4f5f688802"}
Feb 27 18:35:01 crc kubenswrapper[4708]: I0227 18:35:01.167613 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g4p94" event={"ID":"7dd680ab-5ac1-4191-b892-05dcdae323b1","Type":"ContainerDied","Data":"64ab47f9f0014063b559291c5b9188bb10d27eaaeadd8b092858af95cdd893c1"}
Feb 27 18:35:01 crc kubenswrapper[4708]: I0227 18:35:01.167611 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g4p94"
Feb 27 18:35:01 crc kubenswrapper[4708]: I0227 18:35:01.167633 4708 scope.go:117] "RemoveContainer" containerID="233bfcaa24cabb8759758e5aa1e5f309dfbd06e46ce640fca1ec1e4f5f688802"
Feb 27 18:35:01 crc kubenswrapper[4708]: I0227 18:35:01.204009 4708 scope.go:117] "RemoveContainer" containerID="317ca35ba4006505b72887308efb78bd66b8c6623417ceb4c0ce080b4d6c89b3"
Feb 27 18:35:01 crc kubenswrapper[4708]: I0227 18:35:01.221348 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4p94"]
Feb 27 18:35:01 crc kubenswrapper[4708]: I0227 18:35:01.239739 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g4p94"]
Feb 27 18:35:01 crc kubenswrapper[4708]: I0227 18:35:01.246985 4708 scope.go:117] "RemoveContainer" containerID="be7df92e0d8e484ac56ddd09c01763673762020ca18be207009e773f99e77434"
Feb 27 18:35:01 crc kubenswrapper[4708]: I0227 18:35:01.298726 4708 scope.go:117] "RemoveContainer" containerID="233bfcaa24cabb8759758e5aa1e5f309dfbd06e46ce640fca1ec1e4f5f688802"
Feb 27 18:35:01 crc kubenswrapper[4708]: E0227 18:35:01.299165 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"233bfcaa24cabb8759758e5aa1e5f309dfbd06e46ce640fca1ec1e4f5f688802\": container with ID starting with 233bfcaa24cabb8759758e5aa1e5f309dfbd06e46ce640fca1ec1e4f5f688802 not found: ID does not exist" containerID="233bfcaa24cabb8759758e5aa1e5f309dfbd06e46ce640fca1ec1e4f5f688802"
\"233bfcaa24cabb8759758e5aa1e5f309dfbd06e46ce640fca1ec1e4f5f688802\": container with ID starting with 233bfcaa24cabb8759758e5aa1e5f309dfbd06e46ce640fca1ec1e4f5f688802 not found: ID does not exist" Feb 27 18:35:01 crc kubenswrapper[4708]: I0227 18:35:01.299282 4708 scope.go:117] "RemoveContainer" containerID="317ca35ba4006505b72887308efb78bd66b8c6623417ceb4c0ce080b4d6c89b3" Feb 27 18:35:01 crc kubenswrapper[4708]: E0227 18:35:01.300406 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"317ca35ba4006505b72887308efb78bd66b8c6623417ceb4c0ce080b4d6c89b3\": container with ID starting with 317ca35ba4006505b72887308efb78bd66b8c6623417ceb4c0ce080b4d6c89b3 not found: ID does not exist" containerID="317ca35ba4006505b72887308efb78bd66b8c6623417ceb4c0ce080b4d6c89b3" Feb 27 18:35:01 crc kubenswrapper[4708]: I0227 18:35:01.300453 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"317ca35ba4006505b72887308efb78bd66b8c6623417ceb4c0ce080b4d6c89b3"} err="failed to get container status \"317ca35ba4006505b72887308efb78bd66b8c6623417ceb4c0ce080b4d6c89b3\": rpc error: code = NotFound desc = could not find container \"317ca35ba4006505b72887308efb78bd66b8c6623417ceb4c0ce080b4d6c89b3\": container with ID starting with 317ca35ba4006505b72887308efb78bd66b8c6623417ceb4c0ce080b4d6c89b3 not found: ID does not exist" Feb 27 18:35:01 crc kubenswrapper[4708]: I0227 18:35:01.300487 4708 scope.go:117] "RemoveContainer" containerID="be7df92e0d8e484ac56ddd09c01763673762020ca18be207009e773f99e77434" Feb 27 18:35:01 crc kubenswrapper[4708]: E0227 18:35:01.301050 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be7df92e0d8e484ac56ddd09c01763673762020ca18be207009e773f99e77434\": container with ID starting with be7df92e0d8e484ac56ddd09c01763673762020ca18be207009e773f99e77434 not found: ID does not exist" containerID="be7df92e0d8e484ac56ddd09c01763673762020ca18be207009e773f99e77434" Feb 27 18:35:01 crc kubenswrapper[4708]: I0227 18:35:01.301096 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be7df92e0d8e484ac56ddd09c01763673762020ca18be207009e773f99e77434"} err="failed to get container status \"be7df92e0d8e484ac56ddd09c01763673762020ca18be207009e773f99e77434\": rpc error: code = NotFound desc = could not find container \"be7df92e0d8e484ac56ddd09c01763673762020ca18be207009e773f99e77434\": container with ID starting with be7df92e0d8e484ac56ddd09c01763673762020ca18be207009e773f99e77434 not found: ID does not exist" Feb 27 18:35:02 crc kubenswrapper[4708]: I0227 18:35:02.247285 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7dd680ab-5ac1-4191-b892-05dcdae323b1" path="/var/lib/kubelet/pods/7dd680ab-5ac1-4191-b892-05dcdae323b1/volumes" Feb 27 18:35:06 crc kubenswrapper[4708]: I0227 18:35:06.142218 4708 scope.go:117] "RemoveContainer" containerID="3ba0bfeeb3d331982f287a2f1dbaf9458f66246784570abd3186709edb49bf1a" Feb 27 18:35:06 crc kubenswrapper[4708]: I0227 18:35:06.181480 4708 scope.go:117] "RemoveContainer" containerID="73d34910668ff4c577abe1dc6605e16ca2a7f555e75846b4a2a6f4e2fa696211" Feb 27 18:35:14 crc kubenswrapper[4708]: E0227 18:35:14.231185 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-svp7k" podUID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" Feb 27 18:35:15 crc kubenswrapper[4708]: I0227 18:35:15.228946 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:35:15 crc kubenswrapper[4708]: E0227 18:35:15.229589 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:35:27 crc kubenswrapper[4708]: I0227 18:35:27.229706 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:35:27 crc kubenswrapper[4708]: E0227 18:35:27.230914 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:35:29 crc kubenswrapper[4708]: I0227 18:35:29.527691 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svp7k" event={"ID":"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe","Type":"ContainerStarted","Data":"6588a9c8d4e1a3542a9f096e7dccefd47489a18c0c0c777c5f681d9bee33def9"} Feb 27 18:35:33 crc kubenswrapper[4708]: I0227 18:35:33.592228 4708 generic.go:334] "Generic (PLEG): container finished" podID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" containerID="6588a9c8d4e1a3542a9f096e7dccefd47489a18c0c0c777c5f681d9bee33def9" exitCode=0 Feb 27 18:35:33 crc kubenswrapper[4708]: I0227 18:35:33.592295 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svp7k" event={"ID":"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe","Type":"ContainerDied","Data":"6588a9c8d4e1a3542a9f096e7dccefd47489a18c0c0c777c5f681d9bee33def9"} Feb 27 18:35:34 crc kubenswrapper[4708]: I0227 18:35:34.637201 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svp7k" event={"ID":"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe","Type":"ContainerStarted","Data":"9609507b50ce26d39ebca2277f0b623d952b97feaaa0804525bcdf45ec5e565a"} Feb 27 18:35:34 crc kubenswrapper[4708]: I0227 18:35:34.672351 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-svp7k" podStartSLOduration=2.565459209 podStartE2EDuration="49.672331514s" podCreationTimestamp="2026-02-27 18:34:45 +0000 UTC" firstStartedPulling="2026-02-27 18:34:46.992926864 +0000 UTC m=+6085.508724461" lastFinishedPulling="2026-02-27 18:35:34.099799179 +0000 UTC m=+6132.615596766" observedRunningTime="2026-02-27 18:35:34.670112851 +0000 UTC m=+6133.185910438" watchObservedRunningTime="2026-02-27 18:35:34.672331514 +0000 UTC m=+6133.188129101" Feb 27 18:35:36 crc kubenswrapper[4708]: I0227 18:35:36.161091 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-svp7k" Feb 27 
Feb 27 18:35:36 crc kubenswrapper[4708]: I0227 18:35:36.161389 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-svp7k"
Feb 27 18:35:37 crc kubenswrapper[4708]: I0227 18:35:37.228527 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-svp7k" podUID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" containerName="registry-server" probeResult="failure" output=<
Feb 27 18:35:37 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s
Feb 27 18:35:37 crc kubenswrapper[4708]: >
Feb 27 18:35:41 crc kubenswrapper[4708]: I0227 18:35:41.229093 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc"
Feb 27 18:35:41 crc kubenswrapper[4708]: E0227 18:35:41.230033 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 18:35:46 crc kubenswrapper[4708]: I0227 18:35:46.252220 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-svp7k"
Feb 27 18:35:46 crc kubenswrapper[4708]: I0227 18:35:46.334160 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-svp7k"
Feb 27 18:35:47 crc kubenswrapper[4708]: I0227 18:35:47.048642 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-svp7k"]
Feb 27 18:35:47 crc kubenswrapper[4708]: I0227 18:35:47.330240 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-svp7k" podUID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" containerName="registry-server" containerID="cri-o://9609507b50ce26d39ebca2277f0b623d952b97feaaa0804525bcdf45ec5e565a" gracePeriod=2
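
The Startup probe failure above ("timeout: failed to connect service \":50051\" within 1s") indicates the registry-server gRPC endpoint was not yet accepting connections; the probe transitions to started and ready about nine seconds later. A minimal reachability check mirroring the probe, assuming the grpcio package and a hypothetical pod IP (10.217.0.99 is illustrative, not taken from this log; only the port comes from the probe output):

    import grpc

    channel = grpc.insecure_channel("10.217.0.99:50051")  # hypothetical pod IP, port from the probe output
    try:
        # Block until the channel connects or the deadline passes,
        # mirroring the probe's 1-second timeout
        grpc.channel_ready_future(channel).result(timeout=1)
        print("registry-server is accepting connections")
    except grpc.FutureTimeoutError:
        print("timed out: same failure mode the startup probe reported")
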
Need to start a new one" pod="openshift-marketplace/redhat-operators-svp7k" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.283351 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-utilities\") pod \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\" (UID: \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\") " Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.283954 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-catalog-content\") pod \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\" (UID: \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\") " Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.284047 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n59rn\" (UniqueName: \"kubernetes.io/projected/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-kube-api-access-n59rn\") pod \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\" (UID: \"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe\") " Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.285066 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-utilities" (OuterVolumeSpecName: "utilities") pod "445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" (UID: "445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.293819 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-kube-api-access-n59rn" (OuterVolumeSpecName: "kube-api-access-n59rn") pod "445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" (UID: "445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe"). InnerVolumeSpecName "kube-api-access-n59rn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.346447 4708 generic.go:334] "Generic (PLEG): container finished" podID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" containerID="9609507b50ce26d39ebca2277f0b623d952b97feaaa0804525bcdf45ec5e565a" exitCode=0 Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.346584 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-svp7k" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.347616 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svp7k" event={"ID":"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe","Type":"ContainerDied","Data":"9609507b50ce26d39ebca2277f0b623d952b97feaaa0804525bcdf45ec5e565a"} Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.347788 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-svp7k" event={"ID":"445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe","Type":"ContainerDied","Data":"2f12e05e41881ae1058e7115168cb7dc30f03da066d220b387bbfda7ffd87f4c"} Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.347877 4708 scope.go:117] "RemoveContainer" containerID="9609507b50ce26d39ebca2277f0b623d952b97feaaa0804525bcdf45ec5e565a" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.389201 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n59rn\" (UniqueName: \"kubernetes.io/projected/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-kube-api-access-n59rn\") on node \"crc\" DevicePath \"\"" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.389272 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.406711 4708 scope.go:117] "RemoveContainer" containerID="6588a9c8d4e1a3542a9f096e7dccefd47489a18c0c0c777c5f681d9bee33def9" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.438824 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" (UID: "445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.444061 4708 scope.go:117] "RemoveContainer" containerID="32c0c6a13d9cebc18a99fb0c186d35866661c663a2e76cff249a647d1c1c4cba" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.492117 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.494440 4708 scope.go:117] "RemoveContainer" containerID="9609507b50ce26d39ebca2277f0b623d952b97feaaa0804525bcdf45ec5e565a" Feb 27 18:35:48 crc kubenswrapper[4708]: E0227 18:35:48.495176 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9609507b50ce26d39ebca2277f0b623d952b97feaaa0804525bcdf45ec5e565a\": container with ID starting with 9609507b50ce26d39ebca2277f0b623d952b97feaaa0804525bcdf45ec5e565a not found: ID does not exist" containerID="9609507b50ce26d39ebca2277f0b623d952b97feaaa0804525bcdf45ec5e565a" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.495226 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9609507b50ce26d39ebca2277f0b623d952b97feaaa0804525bcdf45ec5e565a"} err="failed to get container status \"9609507b50ce26d39ebca2277f0b623d952b97feaaa0804525bcdf45ec5e565a\": rpc error: code = NotFound desc = could not find container \"9609507b50ce26d39ebca2277f0b623d952b97feaaa0804525bcdf45ec5e565a\": container with ID starting with 9609507b50ce26d39ebca2277f0b623d952b97feaaa0804525bcdf45ec5e565a not found: ID does not exist" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.495262 4708 scope.go:117] "RemoveContainer" containerID="6588a9c8d4e1a3542a9f096e7dccefd47489a18c0c0c777c5f681d9bee33def9" Feb 27 18:35:48 crc kubenswrapper[4708]: E0227 18:35:48.495934 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6588a9c8d4e1a3542a9f096e7dccefd47489a18c0c0c777c5f681d9bee33def9\": container with ID starting with 6588a9c8d4e1a3542a9f096e7dccefd47489a18c0c0c777c5f681d9bee33def9 not found: ID does not exist" containerID="6588a9c8d4e1a3542a9f096e7dccefd47489a18c0c0c777c5f681d9bee33def9" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.495995 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6588a9c8d4e1a3542a9f096e7dccefd47489a18c0c0c777c5f681d9bee33def9"} err="failed to get container status \"6588a9c8d4e1a3542a9f096e7dccefd47489a18c0c0c777c5f681d9bee33def9\": rpc error: code = NotFound desc = could not find container \"6588a9c8d4e1a3542a9f096e7dccefd47489a18c0c0c777c5f681d9bee33def9\": container with ID starting with 6588a9c8d4e1a3542a9f096e7dccefd47489a18c0c0c777c5f681d9bee33def9 not found: ID does not exist" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.496036 4708 scope.go:117] "RemoveContainer" containerID="32c0c6a13d9cebc18a99fb0c186d35866661c663a2e76cff249a647d1c1c4cba" Feb 27 18:35:48 crc kubenswrapper[4708]: E0227 18:35:48.496561 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32c0c6a13d9cebc18a99fb0c186d35866661c663a2e76cff249a647d1c1c4cba\": container with ID starting with 32c0c6a13d9cebc18a99fb0c186d35866661c663a2e76cff249a647d1c1c4cba not found: ID does not exist" 
containerID="32c0c6a13d9cebc18a99fb0c186d35866661c663a2e76cff249a647d1c1c4cba" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.496590 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"32c0c6a13d9cebc18a99fb0c186d35866661c663a2e76cff249a647d1c1c4cba"} err="failed to get container status \"32c0c6a13d9cebc18a99fb0c186d35866661c663a2e76cff249a647d1c1c4cba\": rpc error: code = NotFound desc = could not find container \"32c0c6a13d9cebc18a99fb0c186d35866661c663a2e76cff249a647d1c1c4cba\": container with ID starting with 32c0c6a13d9cebc18a99fb0c186d35866661c663a2e76cff249a647d1c1c4cba not found: ID does not exist" Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.706968 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-svp7k"] Feb 27 18:35:48 crc kubenswrapper[4708]: I0227 18:35:48.717874 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-svp7k"] Feb 27 18:35:50 crc kubenswrapper[4708]: I0227 18:35:50.244169 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" path="/var/lib/kubelet/pods/445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe/volumes" Feb 27 18:35:53 crc kubenswrapper[4708]: I0227 18:35:53.229552 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:35:53 crc kubenswrapper[4708]: E0227 18:35:53.231169 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.223676 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536956-nv8wd"] Feb 27 18:36:00 crc kubenswrapper[4708]: E0227 18:36:00.225427 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" containerName="extract-content" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.225456 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" containerName="extract-content" Feb 27 18:36:00 crc kubenswrapper[4708]: E0227 18:36:00.225484 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" containerName="extract-utilities" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.225501 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" containerName="extract-utilities" Feb 27 18:36:00 crc kubenswrapper[4708]: E0227 18:36:00.225554 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd680ab-5ac1-4191-b892-05dcdae323b1" containerName="extract-utilities" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.225573 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd680ab-5ac1-4191-b892-05dcdae323b1" containerName="extract-utilities" Feb 27 18:36:00 crc kubenswrapper[4708]: E0227 18:36:00.225596 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd680ab-5ac1-4191-b892-05dcdae323b1" containerName="extract-content" Feb 27 18:36:00 crc kubenswrapper[4708]: 
I0227 18:36:00.225610 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd680ab-5ac1-4191-b892-05dcdae323b1" containerName="extract-content" Feb 27 18:36:00 crc kubenswrapper[4708]: E0227 18:36:00.225637 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" containerName="registry-server" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.225650 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" containerName="registry-server" Feb 27 18:36:00 crc kubenswrapper[4708]: E0227 18:36:00.225704 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7dd680ab-5ac1-4191-b892-05dcdae323b1" containerName="registry-server" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.225720 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7dd680ab-5ac1-4191-b892-05dcdae323b1" containerName="registry-server" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.226171 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7dd680ab-5ac1-4191-b892-05dcdae323b1" containerName="registry-server" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.226245 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="445f4c66-1fdc-4dd9-9c7e-363cadcd2bfe" containerName="registry-server" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.228054 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536956-nv8wd" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.231970 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.235937 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.236470 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.253929 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536956-nv8wd"] Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.339189 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm7th\" (UniqueName: \"kubernetes.io/projected/b0adf7a7-1796-44d3-9283-843c9748c5d4-kube-api-access-cm7th\") pod \"auto-csr-approver-29536956-nv8wd\" (UID: \"b0adf7a7-1796-44d3-9283-843c9748c5d4\") " pod="openshift-infra/auto-csr-approver-29536956-nv8wd" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.440799 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm7th\" (UniqueName: \"kubernetes.io/projected/b0adf7a7-1796-44d3-9283-843c9748c5d4-kube-api-access-cm7th\") pod \"auto-csr-approver-29536956-nv8wd\" (UID: \"b0adf7a7-1796-44d3-9283-843c9748c5d4\") " pod="openshift-infra/auto-csr-approver-29536956-nv8wd" Feb 27 18:36:00 crc kubenswrapper[4708]: I0227 18:36:00.467183 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cm7th\" (UniqueName: \"kubernetes.io/projected/b0adf7a7-1796-44d3-9283-843c9748c5d4-kube-api-access-cm7th\") pod \"auto-csr-approver-29536956-nv8wd\" (UID: \"b0adf7a7-1796-44d3-9283-843c9748c5d4\") " pod="openshift-infra/auto-csr-approver-29536956-nv8wd" Feb 27 18:36:00 crc kubenswrapper[4708]: 
I0227 18:36:00.557569 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536956-nv8wd" Feb 27 18:36:01 crc kubenswrapper[4708]: I0227 18:36:01.101911 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536956-nv8wd"] Feb 27 18:36:01 crc kubenswrapper[4708]: I0227 18:36:01.510790 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536956-nv8wd" event={"ID":"b0adf7a7-1796-44d3-9283-843c9748c5d4","Type":"ContainerStarted","Data":"c3db4bf9029c6bc7c9a61b01c0fdf330183bb96c6abe067347986fabd71f9e96"} Feb 27 18:36:02 crc kubenswrapper[4708]: I0227 18:36:02.524768 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536956-nv8wd" event={"ID":"b0adf7a7-1796-44d3-9283-843c9748c5d4","Type":"ContainerStarted","Data":"99749fcc7811d13a00242e8cc54a1bb1785412ff8b2afe37e5eb5e5e5b19718c"} Feb 27 18:36:02 crc kubenswrapper[4708]: I0227 18:36:02.545205 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536956-nv8wd" podStartSLOduration=1.458015423 podStartE2EDuration="2.545187056s" podCreationTimestamp="2026-02-27 18:36:00 +0000 UTC" firstStartedPulling="2026-02-27 18:36:01.103659057 +0000 UTC m=+6159.619456644" lastFinishedPulling="2026-02-27 18:36:02.19083068 +0000 UTC m=+6160.706628277" observedRunningTime="2026-02-27 18:36:02.543307643 +0000 UTC m=+6161.059105230" watchObservedRunningTime="2026-02-27 18:36:02.545187056 +0000 UTC m=+6161.060984633" Feb 27 18:36:03 crc kubenswrapper[4708]: I0227 18:36:03.545176 4708 generic.go:334] "Generic (PLEG): container finished" podID="b0adf7a7-1796-44d3-9283-843c9748c5d4" containerID="99749fcc7811d13a00242e8cc54a1bb1785412ff8b2afe37e5eb5e5e5b19718c" exitCode=0 Feb 27 18:36:03 crc kubenswrapper[4708]: I0227 18:36:03.545247 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536956-nv8wd" event={"ID":"b0adf7a7-1796-44d3-9283-843c9748c5d4","Type":"ContainerDied","Data":"99749fcc7811d13a00242e8cc54a1bb1785412ff8b2afe37e5eb5e5e5b19718c"} Feb 27 18:36:05 crc kubenswrapper[4708]: I0227 18:36:05.082543 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536956-nv8wd" Feb 27 18:36:05 crc kubenswrapper[4708]: I0227 18:36:05.159911 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm7th\" (UniqueName: \"kubernetes.io/projected/b0adf7a7-1796-44d3-9283-843c9748c5d4-kube-api-access-cm7th\") pod \"b0adf7a7-1796-44d3-9283-843c9748c5d4\" (UID: \"b0adf7a7-1796-44d3-9283-843c9748c5d4\") " Feb 27 18:36:05 crc kubenswrapper[4708]: I0227 18:36:05.168357 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0adf7a7-1796-44d3-9283-843c9748c5d4-kube-api-access-cm7th" (OuterVolumeSpecName: "kube-api-access-cm7th") pod "b0adf7a7-1796-44d3-9283-843c9748c5d4" (UID: "b0adf7a7-1796-44d3-9283-843c9748c5d4"). InnerVolumeSpecName "kube-api-access-cm7th". 
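
On the startup-latency entry above, podStartE2EDuration spans pod creation (18:36:00) to the observed running time, while podStartSLOduration appears to be that same interval minus the image-pull window (lastFinishedPulling minus firstStartedPulling); the logged figures agree with that reading to within rounding. A quick check of the arithmetic, with the seconds taken straight from the entry:

    package main

    import "fmt"

    // Checks the pod_startup_latency_tracker arithmetic from the entry above.
    // Values are seconds past the pod's creation timestamp (18:36:00 UTC).
    func main() {
        e2e := 2.545187056                  // podStartE2EDuration
        firstPull := 1.103659057            // firstStartedPulling
        lastPull := 2.190830680             // lastFinishedPulling
        slo := e2e - (lastPull - firstPull) // exclude the image-pull window
        fmt.Printf("podStartSLOduration ~ %.9f\n", slo) // logged: 1.458015423
    }
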
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:36:05 crc kubenswrapper[4708]: I0227 18:36:05.262541 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm7th\" (UniqueName: \"kubernetes.io/projected/b0adf7a7-1796-44d3-9283-843c9748c5d4-kube-api-access-cm7th\") on node \"crc\" DevicePath \"\"" Feb 27 18:36:05 crc kubenswrapper[4708]: I0227 18:36:05.341334 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536950-mpkfr"] Feb 27 18:36:05 crc kubenswrapper[4708]: I0227 18:36:05.353612 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536950-mpkfr"] Feb 27 18:36:05 crc kubenswrapper[4708]: I0227 18:36:05.628101 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536956-nv8wd" event={"ID":"b0adf7a7-1796-44d3-9283-843c9748c5d4","Type":"ContainerDied","Data":"c3db4bf9029c6bc7c9a61b01c0fdf330183bb96c6abe067347986fabd71f9e96"} Feb 27 18:36:05 crc kubenswrapper[4708]: I0227 18:36:05.628163 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c3db4bf9029c6bc7c9a61b01c0fdf330183bb96c6abe067347986fabd71f9e96" Feb 27 18:36:05 crc kubenswrapper[4708]: I0227 18:36:05.628236 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536956-nv8wd" Feb 27 18:36:06 crc kubenswrapper[4708]: I0227 18:36:06.247683 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="812d28f8-6380-4708-8aaf-cc2d7f91c736" path="/var/lib/kubelet/pods/812d28f8-6380-4708-8aaf-cc2d7f91c736/volumes" Feb 27 18:36:07 crc kubenswrapper[4708]: I0227 18:36:07.228791 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:36:07 crc kubenswrapper[4708]: I0227 18:36:07.659407 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"faeacfb7132d987bcba0b0c2aee99f4b268383805d0d1f0a0e2e47f12f9018e5"} Feb 27 18:37:06 crc kubenswrapper[4708]: I0227 18:37:06.369488 4708 scope.go:117] "RemoveContainer" containerID="dcb05190da20622f72d75973240430fc1e89dac32d274aca8ee7664f9691506e" Feb 27 18:37:06 crc kubenswrapper[4708]: I0227 18:37:06.418360 4708 scope.go:117] "RemoveContainer" containerID="af88185df3e3c2aed3db820897f58690c860959615ea697c9b978cb3f02912ff" Feb 27 18:38:00 crc kubenswrapper[4708]: I0227 18:38:00.159872 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536958-tsgcc"] Feb 27 18:38:00 crc kubenswrapper[4708]: E0227 18:38:00.161437 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0adf7a7-1796-44d3-9283-843c9748c5d4" containerName="oc" Feb 27 18:38:00 crc kubenswrapper[4708]: I0227 18:38:00.161461 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0adf7a7-1796-44d3-9283-843c9748c5d4" containerName="oc" Feb 27 18:38:00 crc kubenswrapper[4708]: I0227 18:38:00.161814 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0adf7a7-1796-44d3-9283-843c9748c5d4" containerName="oc" Feb 27 18:38:00 crc kubenswrapper[4708]: I0227 18:38:00.163090 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536958-tsgcc" Feb 27 18:38:00 crc kubenswrapper[4708]: I0227 18:38:00.165760 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:38:00 crc kubenswrapper[4708]: I0227 18:38:00.166755 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:38:00 crc kubenswrapper[4708]: I0227 18:38:00.166894 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 18:38:00 crc kubenswrapper[4708]: I0227 18:38:00.192005 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536958-tsgcc"] Feb 27 18:38:00 crc kubenswrapper[4708]: I0227 18:38:00.283329 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95hrg\" (UniqueName: \"kubernetes.io/projected/406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1-kube-api-access-95hrg\") pod \"auto-csr-approver-29536958-tsgcc\" (UID: \"406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1\") " pod="openshift-infra/auto-csr-approver-29536958-tsgcc" Feb 27 18:38:00 crc kubenswrapper[4708]: I0227 18:38:00.387894 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95hrg\" (UniqueName: \"kubernetes.io/projected/406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1-kube-api-access-95hrg\") pod \"auto-csr-approver-29536958-tsgcc\" (UID: \"406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1\") " pod="openshift-infra/auto-csr-approver-29536958-tsgcc" Feb 27 18:38:00 crc kubenswrapper[4708]: I0227 18:38:00.414804 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95hrg\" (UniqueName: \"kubernetes.io/projected/406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1-kube-api-access-95hrg\") pod \"auto-csr-approver-29536958-tsgcc\" (UID: \"406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1\") " pod="openshift-infra/auto-csr-approver-29536958-tsgcc" Feb 27 18:38:00 crc kubenswrapper[4708]: I0227 18:38:00.486138 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536958-tsgcc" Feb 27 18:38:01 crc kubenswrapper[4708]: I0227 18:38:01.143107 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536958-tsgcc"] Feb 27 18:38:02 crc kubenswrapper[4708]: I0227 18:38:02.037301 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536958-tsgcc" event={"ID":"406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1","Type":"ContainerStarted","Data":"0e69a78c240096839081d780b07f122f9458f9ff600e9a57dcc6822739fa55e9"} Feb 27 18:38:03 crc kubenswrapper[4708]: I0227 18:38:03.051933 4708 generic.go:334] "Generic (PLEG): container finished" podID="406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1" containerID="d251394c46c6a4ff863e894b602fe17ffd9016f88dbcc5cf2246d00df8a641a1" exitCode=0 Feb 27 18:38:03 crc kubenswrapper[4708]: I0227 18:38:03.052062 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536958-tsgcc" event={"ID":"406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1","Type":"ContainerDied","Data":"d251394c46c6a4ff863e894b602fe17ffd9016f88dbcc5cf2246d00df8a641a1"} Feb 27 18:38:04 crc kubenswrapper[4708]: I0227 18:38:04.550696 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536958-tsgcc" Feb 27 18:38:04 crc kubenswrapper[4708]: I0227 18:38:04.660073 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95hrg\" (UniqueName: \"kubernetes.io/projected/406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1-kube-api-access-95hrg\") pod \"406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1\" (UID: \"406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1\") " Feb 27 18:38:04 crc kubenswrapper[4708]: I0227 18:38:04.668154 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1-kube-api-access-95hrg" (OuterVolumeSpecName: "kube-api-access-95hrg") pod "406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1" (UID: "406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1"). InnerVolumeSpecName "kube-api-access-95hrg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:38:04 crc kubenswrapper[4708]: I0227 18:38:04.762987 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95hrg\" (UniqueName: \"kubernetes.io/projected/406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1-kube-api-access-95hrg\") on node \"crc\" DevicePath \"\"" Feb 27 18:38:05 crc kubenswrapper[4708]: I0227 18:38:05.094876 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536958-tsgcc" event={"ID":"406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1","Type":"ContainerDied","Data":"0e69a78c240096839081d780b07f122f9458f9ff600e9a57dcc6822739fa55e9"} Feb 27 18:38:05 crc kubenswrapper[4708]: I0227 18:38:05.094920 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e69a78c240096839081d780b07f122f9458f9ff600e9a57dcc6822739fa55e9" Feb 27 18:38:05 crc kubenswrapper[4708]: I0227 18:38:05.095050 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536958-tsgcc" Feb 27 18:38:05 crc kubenswrapper[4708]: I0227 18:38:05.629953 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536952-vhfxl"] Feb 27 18:38:05 crc kubenswrapper[4708]: I0227 18:38:05.648135 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536952-vhfxl"] Feb 27 18:38:06 crc kubenswrapper[4708]: I0227 18:38:06.245057 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbff0a68-717b-4cac-a965-877ac0ba1767" path="/var/lib/kubelet/pods/dbff0a68-717b-4cac-a965-877ac0ba1767/volumes" Feb 27 18:38:35 crc kubenswrapper[4708]: I0227 18:38:35.631983 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:38:35 crc kubenswrapper[4708]: I0227 18:38:35.632915 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:39:05 crc kubenswrapper[4708]: I0227 18:39:05.633530 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:39:05 crc kubenswrapper[4708]: I0227 18:39:05.634415 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:39:06 crc kubenswrapper[4708]: I0227 18:39:06.570361 4708 scope.go:117] "RemoveContainer" containerID="902e8f51f1b155399d3a2a8d58cb27863f638772cfe18a4137c9a1f5e6a0dae9" Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.134365 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6bk4t"] Feb 27 18:39:33 crc kubenswrapper[4708]: E0227 18:39:33.136072 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1" containerName="oc" Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.136105 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1" containerName="oc" Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.136633 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1" containerName="oc" Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.140454 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.160703 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6bk4t"] Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.283595 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cm2d2\" (UniqueName: \"kubernetes.io/projected/a893092a-b34d-4adc-9751-1a4b92fd22a9-kube-api-access-cm2d2\") pod \"community-operators-6bk4t\" (UID: \"a893092a-b34d-4adc-9751-1a4b92fd22a9\") " pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.283914 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893092a-b34d-4adc-9751-1a4b92fd22a9-utilities\") pod \"community-operators-6bk4t\" (UID: \"a893092a-b34d-4adc-9751-1a4b92fd22a9\") " pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.283993 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893092a-b34d-4adc-9751-1a4b92fd22a9-catalog-content\") pod \"community-operators-6bk4t\" (UID: \"a893092a-b34d-4adc-9751-1a4b92fd22a9\") " pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.385767 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893092a-b34d-4adc-9751-1a4b92fd22a9-utilities\") pod \"community-operators-6bk4t\" (UID: \"a893092a-b34d-4adc-9751-1a4b92fd22a9\") " pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.386125 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893092a-b34d-4adc-9751-1a4b92fd22a9-catalog-content\") pod \"community-operators-6bk4t\" (UID: \"a893092a-b34d-4adc-9751-1a4b92fd22a9\") " pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.386251 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cm2d2\" (UniqueName: \"kubernetes.io/projected/a893092a-b34d-4adc-9751-1a4b92fd22a9-kube-api-access-cm2d2\") pod \"community-operators-6bk4t\" (UID: \"a893092a-b34d-4adc-9751-1a4b92fd22a9\") " pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.386603 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893092a-b34d-4adc-9751-1a4b92fd22a9-utilities\") pod \"community-operators-6bk4t\" (UID: \"a893092a-b34d-4adc-9751-1a4b92fd22a9\") " pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.387633 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893092a-b34d-4adc-9751-1a4b92fd22a9-catalog-content\") pod \"community-operators-6bk4t\" (UID: \"a893092a-b34d-4adc-9751-1a4b92fd22a9\") " pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.407387 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-cm2d2\" (UniqueName: \"kubernetes.io/projected/a893092a-b34d-4adc-9751-1a4b92fd22a9-kube-api-access-cm2d2\") pod \"community-operators-6bk4t\" (UID: \"a893092a-b34d-4adc-9751-1a4b92fd22a9\") " pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:33 crc kubenswrapper[4708]: I0227 18:39:33.463285 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:34 crc kubenswrapper[4708]: I0227 18:39:34.023123 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6bk4t"] Feb 27 18:39:34 crc kubenswrapper[4708]: I0227 18:39:34.323213 4708 generic.go:334] "Generic (PLEG): container finished" podID="a893092a-b34d-4adc-9751-1a4b92fd22a9" containerID="2aafb25217e051f7a5927db0a47345be0853cfa35339e69acb1ea913d5ae43ed" exitCode=0 Feb 27 18:39:34 crc kubenswrapper[4708]: I0227 18:39:34.323270 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6bk4t" event={"ID":"a893092a-b34d-4adc-9751-1a4b92fd22a9","Type":"ContainerDied","Data":"2aafb25217e051f7a5927db0a47345be0853cfa35339e69acb1ea913d5ae43ed"} Feb 27 18:39:34 crc kubenswrapper[4708]: I0227 18:39:34.323545 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6bk4t" event={"ID":"a893092a-b34d-4adc-9751-1a4b92fd22a9","Type":"ContainerStarted","Data":"9a734bc9f00f64b5dd45d384a12cd68d8aeef12f37539e271f8f86010f7b77cd"} Feb 27 18:39:35 crc kubenswrapper[4708]: I0227 18:39:35.342777 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6bk4t" event={"ID":"a893092a-b34d-4adc-9751-1a4b92fd22a9","Type":"ContainerStarted","Data":"501388d0e3a8d30e29ee5996850d8aafaf0c75cf0ba143e01a6e4ade73015349"} Feb 27 18:39:35 crc kubenswrapper[4708]: I0227 18:39:35.631754 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:39:35 crc kubenswrapper[4708]: I0227 18:39:35.631840 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:39:35 crc kubenswrapper[4708]: I0227 18:39:35.631936 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 18:39:35 crc kubenswrapper[4708]: I0227 18:39:35.633366 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"faeacfb7132d987bcba0b0c2aee99f4b268383805d0d1f0a0e2e47f12f9018e5"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:39:35 crc kubenswrapper[4708]: I0227 18:39:35.633496 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" 
containerName="machine-config-daemon" containerID="cri-o://faeacfb7132d987bcba0b0c2aee99f4b268383805d0d1f0a0e2e47f12f9018e5" gracePeriod=600 Feb 27 18:39:36 crc kubenswrapper[4708]: I0227 18:39:36.357328 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="faeacfb7132d987bcba0b0c2aee99f4b268383805d0d1f0a0e2e47f12f9018e5" exitCode=0 Feb 27 18:39:36 crc kubenswrapper[4708]: I0227 18:39:36.357393 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"faeacfb7132d987bcba0b0c2aee99f4b268383805d0d1f0a0e2e47f12f9018e5"} Feb 27 18:39:36 crc kubenswrapper[4708]: I0227 18:39:36.358942 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34"} Feb 27 18:39:36 crc kubenswrapper[4708]: I0227 18:39:36.358994 4708 scope.go:117] "RemoveContainer" containerID="f7776ed9b5831304b42a9ce9cb4375143eb7ca9bcd1b69d32f13fe24da5428dc" Feb 27 18:39:36 crc kubenswrapper[4708]: I0227 18:39:36.362209 4708 generic.go:334] "Generic (PLEG): container finished" podID="a893092a-b34d-4adc-9751-1a4b92fd22a9" containerID="501388d0e3a8d30e29ee5996850d8aafaf0c75cf0ba143e01a6e4ade73015349" exitCode=0 Feb 27 18:39:36 crc kubenswrapper[4708]: I0227 18:39:36.362251 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6bk4t" event={"ID":"a893092a-b34d-4adc-9751-1a4b92fd22a9","Type":"ContainerDied","Data":"501388d0e3a8d30e29ee5996850d8aafaf0c75cf0ba143e01a6e4ade73015349"} Feb 27 18:39:36 crc kubenswrapper[4708]: I0227 18:39:36.364352 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 18:39:37 crc kubenswrapper[4708]: I0227 18:39:37.380079 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6bk4t" event={"ID":"a893092a-b34d-4adc-9751-1a4b92fd22a9","Type":"ContainerStarted","Data":"f3a3af8a299fbee20d5f2ada43c1c78f17575b3fe3c40228acd770977bcc9930"} Feb 27 18:39:37 crc kubenswrapper[4708]: I0227 18:39:37.412545 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6bk4t" podStartSLOduration=1.676935563 podStartE2EDuration="4.412526965s" podCreationTimestamp="2026-02-27 18:39:33 +0000 UTC" firstStartedPulling="2026-02-27 18:39:34.325094897 +0000 UTC m=+6372.840892484" lastFinishedPulling="2026-02-27 18:39:37.060686299 +0000 UTC m=+6375.576483886" observedRunningTime="2026-02-27 18:39:37.405716432 +0000 UTC m=+6375.921514049" watchObservedRunningTime="2026-02-27 18:39:37.412526965 +0000 UTC m=+6375.928324552" Feb 27 18:39:43 crc kubenswrapper[4708]: I0227 18:39:43.464444 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:43 crc kubenswrapper[4708]: I0227 18:39:43.465503 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:43 crc kubenswrapper[4708]: I0227 18:39:43.626634 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:44 crc 
kubenswrapper[4708]: I0227 18:39:44.553698 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:44 crc kubenswrapper[4708]: I0227 18:39:44.875595 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6bk4t"] Feb 27 18:39:46 crc kubenswrapper[4708]: I0227 18:39:46.490937 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6bk4t" podUID="a893092a-b34d-4adc-9751-1a4b92fd22a9" containerName="registry-server" containerID="cri-o://f3a3af8a299fbee20d5f2ada43c1c78f17575b3fe3c40228acd770977bcc9930" gracePeriod=2 Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.217895 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.372294 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893092a-b34d-4adc-9751-1a4b92fd22a9-catalog-content\") pod \"a893092a-b34d-4adc-9751-1a4b92fd22a9\" (UID: \"a893092a-b34d-4adc-9751-1a4b92fd22a9\") " Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.372446 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cm2d2\" (UniqueName: \"kubernetes.io/projected/a893092a-b34d-4adc-9751-1a4b92fd22a9-kube-api-access-cm2d2\") pod \"a893092a-b34d-4adc-9751-1a4b92fd22a9\" (UID: \"a893092a-b34d-4adc-9751-1a4b92fd22a9\") " Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.372470 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893092a-b34d-4adc-9751-1a4b92fd22a9-utilities\") pod \"a893092a-b34d-4adc-9751-1a4b92fd22a9\" (UID: \"a893092a-b34d-4adc-9751-1a4b92fd22a9\") " Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.377785 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a893092a-b34d-4adc-9751-1a4b92fd22a9-utilities" (OuterVolumeSpecName: "utilities") pod "a893092a-b34d-4adc-9751-1a4b92fd22a9" (UID: "a893092a-b34d-4adc-9751-1a4b92fd22a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.381307 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a893092a-b34d-4adc-9751-1a4b92fd22a9-kube-api-access-cm2d2" (OuterVolumeSpecName: "kube-api-access-cm2d2") pod "a893092a-b34d-4adc-9751-1a4b92fd22a9" (UID: "a893092a-b34d-4adc-9751-1a4b92fd22a9"). InnerVolumeSpecName "kube-api-access-cm2d2". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.422741 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a893092a-b34d-4adc-9751-1a4b92fd22a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a893092a-b34d-4adc-9751-1a4b92fd22a9" (UID: "a893092a-b34d-4adc-9751-1a4b92fd22a9"). InnerVolumeSpecName "catalog-content". 
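
"Killing container with a grace period ... gracePeriod=2" above means the runtime delivers the stop signal and escalates to SIGKILL if the container is still running two seconds later. A process-level analogy in Go; the real path goes through the CRI to CRI-O, and "sleep 60" here is just a stand-in workload:

    package main

    import (
        "fmt"
        "os/exec"
        "syscall"
        "time"
    )

    // SIGTERM first, SIGKILL once the grace period expires: the same shape
    // as the kubelet/runtime handoff logged above (Unix-only sketch).
    func main() {
        cmd := exec.Command("sleep", "60")
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        done := make(chan error, 1)
        go func() { done <- cmd.Wait() }()

        cmd.Process.Signal(syscall.SIGTERM) // polite request to exit
        select {
        case err := <-done:
            fmt.Println("exited within grace period:", err)
        case <-time.After(2 * time.Second): // gracePeriod=2, as in the log
            cmd.Process.Kill() // SIGKILL after the grace period
            fmt.Println("grace period expired, killed")
        }
    }
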
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.476114 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a893092a-b34d-4adc-9751-1a4b92fd22a9-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.476209 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cm2d2\" (UniqueName: \"kubernetes.io/projected/a893092a-b34d-4adc-9751-1a4b92fd22a9-kube-api-access-cm2d2\") on node \"crc\" DevicePath \"\"" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.476234 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a893092a-b34d-4adc-9751-1a4b92fd22a9-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.506909 4708 generic.go:334] "Generic (PLEG): container finished" podID="a893092a-b34d-4adc-9751-1a4b92fd22a9" containerID="f3a3af8a299fbee20d5f2ada43c1c78f17575b3fe3c40228acd770977bcc9930" exitCode=0 Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.506976 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6bk4t" event={"ID":"a893092a-b34d-4adc-9751-1a4b92fd22a9","Type":"ContainerDied","Data":"f3a3af8a299fbee20d5f2ada43c1c78f17575b3fe3c40228acd770977bcc9930"} Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.507056 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6bk4t" event={"ID":"a893092a-b34d-4adc-9751-1a4b92fd22a9","Type":"ContainerDied","Data":"9a734bc9f00f64b5dd45d384a12cd68d8aeef12f37539e271f8f86010f7b77cd"} Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.507060 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6bk4t" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.507085 4708 scope.go:117] "RemoveContainer" containerID="f3a3af8a299fbee20d5f2ada43c1c78f17575b3fe3c40228acd770977bcc9930" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.551876 4708 scope.go:117] "RemoveContainer" containerID="501388d0e3a8d30e29ee5996850d8aafaf0c75cf0ba143e01a6e4ade73015349" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.580120 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6bk4t"] Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.589439 4708 scope.go:117] "RemoveContainer" containerID="2aafb25217e051f7a5927db0a47345be0853cfa35339e69acb1ea913d5ae43ed" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.594573 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6bk4t"] Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.639720 4708 scope.go:117] "RemoveContainer" containerID="f3a3af8a299fbee20d5f2ada43c1c78f17575b3fe3c40228acd770977bcc9930" Feb 27 18:39:47 crc kubenswrapper[4708]: E0227 18:39:47.640290 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3a3af8a299fbee20d5f2ada43c1c78f17575b3fe3c40228acd770977bcc9930\": container with ID starting with f3a3af8a299fbee20d5f2ada43c1c78f17575b3fe3c40228acd770977bcc9930 not found: ID does not exist" containerID="f3a3af8a299fbee20d5f2ada43c1c78f17575b3fe3c40228acd770977bcc9930" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.640329 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3a3af8a299fbee20d5f2ada43c1c78f17575b3fe3c40228acd770977bcc9930"} err="failed to get container status \"f3a3af8a299fbee20d5f2ada43c1c78f17575b3fe3c40228acd770977bcc9930\": rpc error: code = NotFound desc = could not find container \"f3a3af8a299fbee20d5f2ada43c1c78f17575b3fe3c40228acd770977bcc9930\": container with ID starting with f3a3af8a299fbee20d5f2ada43c1c78f17575b3fe3c40228acd770977bcc9930 not found: ID does not exist" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.640356 4708 scope.go:117] "RemoveContainer" containerID="501388d0e3a8d30e29ee5996850d8aafaf0c75cf0ba143e01a6e4ade73015349" Feb 27 18:39:47 crc kubenswrapper[4708]: E0227 18:39:47.640820 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"501388d0e3a8d30e29ee5996850d8aafaf0c75cf0ba143e01a6e4ade73015349\": container with ID starting with 501388d0e3a8d30e29ee5996850d8aafaf0c75cf0ba143e01a6e4ade73015349 not found: ID does not exist" containerID="501388d0e3a8d30e29ee5996850d8aafaf0c75cf0ba143e01a6e4ade73015349" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.640887 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"501388d0e3a8d30e29ee5996850d8aafaf0c75cf0ba143e01a6e4ade73015349"} err="failed to get container status \"501388d0e3a8d30e29ee5996850d8aafaf0c75cf0ba143e01a6e4ade73015349\": rpc error: code = NotFound desc = could not find container \"501388d0e3a8d30e29ee5996850d8aafaf0c75cf0ba143e01a6e4ade73015349\": container with ID starting with 501388d0e3a8d30e29ee5996850d8aafaf0c75cf0ba143e01a6e4ade73015349 not found: ID does not exist" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.640918 4708 scope.go:117] "RemoveContainer" 
containerID="2aafb25217e051f7a5927db0a47345be0853cfa35339e69acb1ea913d5ae43ed" Feb 27 18:39:47 crc kubenswrapper[4708]: E0227 18:39:47.641430 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2aafb25217e051f7a5927db0a47345be0853cfa35339e69acb1ea913d5ae43ed\": container with ID starting with 2aafb25217e051f7a5927db0a47345be0853cfa35339e69acb1ea913d5ae43ed not found: ID does not exist" containerID="2aafb25217e051f7a5927db0a47345be0853cfa35339e69acb1ea913d5ae43ed" Feb 27 18:39:47 crc kubenswrapper[4708]: I0227 18:39:47.641457 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2aafb25217e051f7a5927db0a47345be0853cfa35339e69acb1ea913d5ae43ed"} err="failed to get container status \"2aafb25217e051f7a5927db0a47345be0853cfa35339e69acb1ea913d5ae43ed\": rpc error: code = NotFound desc = could not find container \"2aafb25217e051f7a5927db0a47345be0853cfa35339e69acb1ea913d5ae43ed\": container with ID starting with 2aafb25217e051f7a5927db0a47345be0853cfa35339e69acb1ea913d5ae43ed not found: ID does not exist" Feb 27 18:39:48 crc kubenswrapper[4708]: I0227 18:39:48.247813 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a893092a-b34d-4adc-9751-1a4b92fd22a9" path="/var/lib/kubelet/pods/a893092a-b34d-4adc-9751-1a4b92fd22a9/volumes" Feb 27 18:40:00 crc kubenswrapper[4708]: I0227 18:40:00.177735 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536960-wb62j"] Feb 27 18:40:00 crc kubenswrapper[4708]: E0227 18:40:00.179436 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a893092a-b34d-4adc-9751-1a4b92fd22a9" containerName="registry-server" Feb 27 18:40:00 crc kubenswrapper[4708]: I0227 18:40:00.179458 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="a893092a-b34d-4adc-9751-1a4b92fd22a9" containerName="registry-server" Feb 27 18:40:00 crc kubenswrapper[4708]: E0227 18:40:00.179495 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a893092a-b34d-4adc-9751-1a4b92fd22a9" containerName="extract-content" Feb 27 18:40:00 crc kubenswrapper[4708]: I0227 18:40:00.179504 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="a893092a-b34d-4adc-9751-1a4b92fd22a9" containerName="extract-content" Feb 27 18:40:00 crc kubenswrapper[4708]: E0227 18:40:00.179519 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a893092a-b34d-4adc-9751-1a4b92fd22a9" containerName="extract-utilities" Feb 27 18:40:00 crc kubenswrapper[4708]: I0227 18:40:00.179529 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="a893092a-b34d-4adc-9751-1a4b92fd22a9" containerName="extract-utilities" Feb 27 18:40:00 crc kubenswrapper[4708]: I0227 18:40:00.179895 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="a893092a-b34d-4adc-9751-1a4b92fd22a9" containerName="registry-server" Feb 27 18:40:00 crc kubenswrapper[4708]: I0227 18:40:00.181357 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536960-wb62j" Feb 27 18:40:00 crc kubenswrapper[4708]: I0227 18:40:00.184296 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:40:00 crc kubenswrapper[4708]: I0227 18:40:00.184417 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:40:00 crc kubenswrapper[4708]: I0227 18:40:00.184447 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 18:40:00 crc kubenswrapper[4708]: I0227 18:40:00.191251 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536960-wb62j"] Feb 27 18:40:00 crc kubenswrapper[4708]: I0227 18:40:00.251059 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qs9sb\" (UniqueName: \"kubernetes.io/projected/23538ea1-6ab7-4e9f-85b4-858f93b5ac57-kube-api-access-qs9sb\") pod \"auto-csr-approver-29536960-wb62j\" (UID: \"23538ea1-6ab7-4e9f-85b4-858f93b5ac57\") " pod="openshift-infra/auto-csr-approver-29536960-wb62j" Feb 27 18:40:00 crc kubenswrapper[4708]: I0227 18:40:00.353652 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qs9sb\" (UniqueName: \"kubernetes.io/projected/23538ea1-6ab7-4e9f-85b4-858f93b5ac57-kube-api-access-qs9sb\") pod \"auto-csr-approver-29536960-wb62j\" (UID: \"23538ea1-6ab7-4e9f-85b4-858f93b5ac57\") " pod="openshift-infra/auto-csr-approver-29536960-wb62j" Feb 27 18:40:00 crc kubenswrapper[4708]: I0227 18:40:00.376651 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qs9sb\" (UniqueName: \"kubernetes.io/projected/23538ea1-6ab7-4e9f-85b4-858f93b5ac57-kube-api-access-qs9sb\") pod \"auto-csr-approver-29536960-wb62j\" (UID: \"23538ea1-6ab7-4e9f-85b4-858f93b5ac57\") " pod="openshift-infra/auto-csr-approver-29536960-wb62j" Feb 27 18:40:00 crc kubenswrapper[4708]: I0227 18:40:00.526138 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536960-wb62j" Feb 27 18:40:01 crc kubenswrapper[4708]: I0227 18:40:01.052921 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536960-wb62j"] Feb 27 18:40:01 crc kubenswrapper[4708]: I0227 18:40:01.722061 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536960-wb62j" event={"ID":"23538ea1-6ab7-4e9f-85b4-858f93b5ac57","Type":"ContainerStarted","Data":"f2dc807c9ddf93bec6904d798f5fdb82dadad1b0fb5e3b22144e19a716ee947f"} Feb 27 18:40:02 crc kubenswrapper[4708]: I0227 18:40:02.746091 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536960-wb62j" event={"ID":"23538ea1-6ab7-4e9f-85b4-858f93b5ac57","Type":"ContainerStarted","Data":"65827f0a9f6deb03e43794efecc4a41b96c0c224977c289c0f42fc4645345913"} Feb 27 18:40:02 crc kubenswrapper[4708]: I0227 18:40:02.771219 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536960-wb62j" podStartSLOduration=1.653282934 podStartE2EDuration="2.771200805s" podCreationTimestamp="2026-02-27 18:40:00 +0000 UTC" firstStartedPulling="2026-02-27 18:40:01.066108445 +0000 UTC m=+6399.581906032" lastFinishedPulling="2026-02-27 18:40:02.184026306 +0000 UTC m=+6400.699823903" observedRunningTime="2026-02-27 18:40:02.762353325 +0000 UTC m=+6401.278150922" watchObservedRunningTime="2026-02-27 18:40:02.771200805 +0000 UTC m=+6401.286998392" Feb 27 18:40:03 crc kubenswrapper[4708]: I0227 18:40:03.758729 4708 generic.go:334] "Generic (PLEG): container finished" podID="23538ea1-6ab7-4e9f-85b4-858f93b5ac57" containerID="65827f0a9f6deb03e43794efecc4a41b96c0c224977c289c0f42fc4645345913" exitCode=0 Feb 27 18:40:03 crc kubenswrapper[4708]: I0227 18:40:03.758831 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536960-wb62j" event={"ID":"23538ea1-6ab7-4e9f-85b4-858f93b5ac57","Type":"ContainerDied","Data":"65827f0a9f6deb03e43794efecc4a41b96c0c224977c289c0f42fc4645345913"} Feb 27 18:40:05 crc kubenswrapper[4708]: I0227 18:40:05.209831 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536960-wb62j" Feb 27 18:40:05 crc kubenswrapper[4708]: I0227 18:40:05.369440 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536954-r8xf4"] Feb 27 18:40:05 crc kubenswrapper[4708]: I0227 18:40:05.380171 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536954-r8xf4"] Feb 27 18:40:05 crc kubenswrapper[4708]: I0227 18:40:05.389911 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs9sb\" (UniqueName: \"kubernetes.io/projected/23538ea1-6ab7-4e9f-85b4-858f93b5ac57-kube-api-access-qs9sb\") pod \"23538ea1-6ab7-4e9f-85b4-858f93b5ac57\" (UID: \"23538ea1-6ab7-4e9f-85b4-858f93b5ac57\") " Feb 27 18:40:05 crc kubenswrapper[4708]: I0227 18:40:05.404385 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23538ea1-6ab7-4e9f-85b4-858f93b5ac57-kube-api-access-qs9sb" (OuterVolumeSpecName: "kube-api-access-qs9sb") pod "23538ea1-6ab7-4e9f-85b4-858f93b5ac57" (UID: "23538ea1-6ab7-4e9f-85b4-858f93b5ac57"). InnerVolumeSpecName "kube-api-access-qs9sb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:40:05 crc kubenswrapper[4708]: I0227 18:40:05.493156 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs9sb\" (UniqueName: \"kubernetes.io/projected/23538ea1-6ab7-4e9f-85b4-858f93b5ac57-kube-api-access-qs9sb\") on node \"crc\" DevicePath \"\"" Feb 27 18:40:05 crc kubenswrapper[4708]: I0227 18:40:05.782785 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536960-wb62j" event={"ID":"23538ea1-6ab7-4e9f-85b4-858f93b5ac57","Type":"ContainerDied","Data":"f2dc807c9ddf93bec6904d798f5fdb82dadad1b0fb5e3b22144e19a716ee947f"} Feb 27 18:40:05 crc kubenswrapper[4708]: I0227 18:40:05.783414 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f2dc807c9ddf93bec6904d798f5fdb82dadad1b0fb5e3b22144e19a716ee947f" Feb 27 18:40:05 crc kubenswrapper[4708]: I0227 18:40:05.783054 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536960-wb62j" Feb 27 18:40:06 crc kubenswrapper[4708]: I0227 18:40:06.243064 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23389bc5-d111-440f-ba89-725fe3946947" path="/var/lib/kubelet/pods/23389bc5-d111-440f-ba89-725fe3946947/volumes" Feb 27 18:40:06 crc kubenswrapper[4708]: I0227 18:40:06.684611 4708 scope.go:117] "RemoveContainer" containerID="e28f2293e155d0e0293cf9150861ccb4bc4029e6c83e6773bba46c098d5efe3d" Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.342636 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5x2zc"] Feb 27 18:40:18 crc kubenswrapper[4708]: E0227 18:40:18.344056 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23538ea1-6ab7-4e9f-85b4-858f93b5ac57" containerName="oc" Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.344079 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="23538ea1-6ab7-4e9f-85b4-858f93b5ac57" containerName="oc" Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.344543 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="23538ea1-6ab7-4e9f-85b4-858f93b5ac57" containerName="oc" Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.347363 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.379951 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5x2zc"] Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.447190 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-utilities\") pod \"certified-operators-5x2zc\" (UID: \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\") " pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.447275 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h7tcq\" (UniqueName: \"kubernetes.io/projected/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-kube-api-access-h7tcq\") pod \"certified-operators-5x2zc\" (UID: \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\") " pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.447309 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-catalog-content\") pod \"certified-operators-5x2zc\" (UID: \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\") " pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.549722 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h7tcq\" (UniqueName: \"kubernetes.io/projected/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-kube-api-access-h7tcq\") pod \"certified-operators-5x2zc\" (UID: \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\") " pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.549779 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-catalog-content\") pod \"certified-operators-5x2zc\" (UID: \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\") " pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.549946 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-utilities\") pod \"certified-operators-5x2zc\" (UID: \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\") " pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.550467 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-catalog-content\") pod \"certified-operators-5x2zc\" (UID: \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\") " pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.550520 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-utilities\") pod \"certified-operators-5x2zc\" (UID: \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\") " pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.574235 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-h7tcq\" (UniqueName: \"kubernetes.io/projected/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-kube-api-access-h7tcq\") pod \"certified-operators-5x2zc\" (UID: \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\") " pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:18 crc kubenswrapper[4708]: I0227 18:40:18.680754 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:19 crc kubenswrapper[4708]: I0227 18:40:19.189663 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5x2zc"] Feb 27 18:40:19 crc kubenswrapper[4708]: I0227 18:40:19.999383 4708 generic.go:334] "Generic (PLEG): container finished" podID="89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" containerID="6a35991044bdff00f417659505e673af413b883c6fea616a21e267e91bdc72b5" exitCode=0 Feb 27 18:40:20 crc kubenswrapper[4708]: I0227 18:40:19.999986 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x2zc" event={"ID":"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f","Type":"ContainerDied","Data":"6a35991044bdff00f417659505e673af413b883c6fea616a21e267e91bdc72b5"} Feb 27 18:40:20 crc kubenswrapper[4708]: I0227 18:40:20.000050 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x2zc" event={"ID":"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f","Type":"ContainerStarted","Data":"d51a6f9c73b3c5e4ff77fd3baa0340dc422d15629fe83aab8d115c669b8951e2"} Feb 27 18:40:21 crc kubenswrapper[4708]: I0227 18:40:21.033560 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x2zc" event={"ID":"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f","Type":"ContainerStarted","Data":"04ed03de43f4c4a02c1d330412b693dc33e0470a4d16e317d6f223ba3446a65b"} Feb 27 18:40:22 crc kubenswrapper[4708]: I0227 18:40:22.049662 4708 generic.go:334] "Generic (PLEG): container finished" podID="89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" containerID="04ed03de43f4c4a02c1d330412b693dc33e0470a4d16e317d6f223ba3446a65b" exitCode=0 Feb 27 18:40:22 crc kubenswrapper[4708]: I0227 18:40:22.049735 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x2zc" event={"ID":"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f","Type":"ContainerDied","Data":"04ed03de43f4c4a02c1d330412b693dc33e0470a4d16e317d6f223ba3446a65b"} Feb 27 18:40:23 crc kubenswrapper[4708]: I0227 18:40:23.066438 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x2zc" event={"ID":"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f","Type":"ContainerStarted","Data":"3446a05cf431bbfa8e670a403b006373eaed012bb279b2899129cdae03ecf97f"} Feb 27 18:40:23 crc kubenswrapper[4708]: I0227 18:40:23.096470 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5x2zc" podStartSLOduration=2.628381986 podStartE2EDuration="5.096442186s" podCreationTimestamp="2026-02-27 18:40:18 +0000 UTC" firstStartedPulling="2026-02-27 18:40:20.002137683 +0000 UTC m=+6418.517935300" lastFinishedPulling="2026-02-27 18:40:22.470197883 +0000 UTC m=+6420.985995500" observedRunningTime="2026-02-27 18:40:23.08594318 +0000 UTC m=+6421.601740777" watchObservedRunningTime="2026-02-27 18:40:23.096442186 +0000 UTC m=+6421.612239783" Feb 27 18:40:28 crc kubenswrapper[4708]: I0227 18:40:28.681265 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:28 crc kubenswrapper[4708]: I0227 18:40:28.682720 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:28 crc kubenswrapper[4708]: I0227 18:40:28.784303 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:29 crc kubenswrapper[4708]: I0227 18:40:29.182295 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:29 crc kubenswrapper[4708]: I0227 18:40:29.244993 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5x2zc"] Feb 27 18:40:31 crc kubenswrapper[4708]: I0227 18:40:31.148677 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5x2zc" podUID="89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" containerName="registry-server" containerID="cri-o://3446a05cf431bbfa8e670a403b006373eaed012bb279b2899129cdae03ecf97f" gracePeriod=2 Feb 27 18:40:31 crc kubenswrapper[4708]: I0227 18:40:31.721678 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:31 crc kubenswrapper[4708]: I0227 18:40:31.819546 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-catalog-content\") pod \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\" (UID: \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\") " Feb 27 18:40:31 crc kubenswrapper[4708]: I0227 18:40:31.819675 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-utilities\") pod \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\" (UID: \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\") " Feb 27 18:40:31 crc kubenswrapper[4708]: I0227 18:40:31.819750 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h7tcq\" (UniqueName: \"kubernetes.io/projected/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-kube-api-access-h7tcq\") pod \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\" (UID: \"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f\") " Feb 27 18:40:31 crc kubenswrapper[4708]: I0227 18:40:31.820613 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-utilities" (OuterVolumeSpecName: "utilities") pod "89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" (UID: "89443d54-f3b0-4a7f-8cbf-0f96879ecb0f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:40:31 crc kubenswrapper[4708]: I0227 18:40:31.830083 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-kube-api-access-h7tcq" (OuterVolumeSpecName: "kube-api-access-h7tcq") pod "89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" (UID: "89443d54-f3b0-4a7f-8cbf-0f96879ecb0f"). InnerVolumeSpecName "kube-api-access-h7tcq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:40:31 crc kubenswrapper[4708]: I0227 18:40:31.877069 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" (UID: "89443d54-f3b0-4a7f-8cbf-0f96879ecb0f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:40:31 crc kubenswrapper[4708]: I0227 18:40:31.921868 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h7tcq\" (UniqueName: \"kubernetes.io/projected/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-kube-api-access-h7tcq\") on node \"crc\" DevicePath \"\"" Feb 27 18:40:31 crc kubenswrapper[4708]: I0227 18:40:31.921905 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:40:31 crc kubenswrapper[4708]: I0227 18:40:31.921915 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.164556 4708 generic.go:334] "Generic (PLEG): container finished" podID="89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" containerID="3446a05cf431bbfa8e670a403b006373eaed012bb279b2899129cdae03ecf97f" exitCode=0 Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.164623 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x2zc" event={"ID":"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f","Type":"ContainerDied","Data":"3446a05cf431bbfa8e670a403b006373eaed012bb279b2899129cdae03ecf97f"} Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.165136 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5x2zc" event={"ID":"89443d54-f3b0-4a7f-8cbf-0f96879ecb0f","Type":"ContainerDied","Data":"d51a6f9c73b3c5e4ff77fd3baa0340dc422d15629fe83aab8d115c669b8951e2"} Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.165174 4708 scope.go:117] "RemoveContainer" containerID="3446a05cf431bbfa8e670a403b006373eaed012bb279b2899129cdae03ecf97f" Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.164652 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-5x2zc" Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.202677 4708 scope.go:117] "RemoveContainer" containerID="04ed03de43f4c4a02c1d330412b693dc33e0470a4d16e317d6f223ba3446a65b" Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.248370 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5x2zc"] Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.248428 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5x2zc"] Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.253396 4708 scope.go:117] "RemoveContainer" containerID="6a35991044bdff00f417659505e673af413b883c6fea616a21e267e91bdc72b5" Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.289110 4708 scope.go:117] "RemoveContainer" containerID="3446a05cf431bbfa8e670a403b006373eaed012bb279b2899129cdae03ecf97f" Feb 27 18:40:32 crc kubenswrapper[4708]: E0227 18:40:32.289763 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3446a05cf431bbfa8e670a403b006373eaed012bb279b2899129cdae03ecf97f\": container with ID starting with 3446a05cf431bbfa8e670a403b006373eaed012bb279b2899129cdae03ecf97f not found: ID does not exist" containerID="3446a05cf431bbfa8e670a403b006373eaed012bb279b2899129cdae03ecf97f" Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.289896 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3446a05cf431bbfa8e670a403b006373eaed012bb279b2899129cdae03ecf97f"} err="failed to get container status \"3446a05cf431bbfa8e670a403b006373eaed012bb279b2899129cdae03ecf97f\": rpc error: code = NotFound desc = could not find container \"3446a05cf431bbfa8e670a403b006373eaed012bb279b2899129cdae03ecf97f\": container with ID starting with 3446a05cf431bbfa8e670a403b006373eaed012bb279b2899129cdae03ecf97f not found: ID does not exist" Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.289982 4708 scope.go:117] "RemoveContainer" containerID="04ed03de43f4c4a02c1d330412b693dc33e0470a4d16e317d6f223ba3446a65b" Feb 27 18:40:32 crc kubenswrapper[4708]: E0227 18:40:32.291022 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04ed03de43f4c4a02c1d330412b693dc33e0470a4d16e317d6f223ba3446a65b\": container with ID starting with 04ed03de43f4c4a02c1d330412b693dc33e0470a4d16e317d6f223ba3446a65b not found: ID does not exist" containerID="04ed03de43f4c4a02c1d330412b693dc33e0470a4d16e317d6f223ba3446a65b" Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.291086 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04ed03de43f4c4a02c1d330412b693dc33e0470a4d16e317d6f223ba3446a65b"} err="failed to get container status \"04ed03de43f4c4a02c1d330412b693dc33e0470a4d16e317d6f223ba3446a65b\": rpc error: code = NotFound desc = could not find container \"04ed03de43f4c4a02c1d330412b693dc33e0470a4d16e317d6f223ba3446a65b\": container with ID starting with 04ed03de43f4c4a02c1d330412b693dc33e0470a4d16e317d6f223ba3446a65b not found: ID does not exist" Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.291129 4708 scope.go:117] "RemoveContainer" containerID="6a35991044bdff00f417659505e673af413b883c6fea616a21e267e91bdc72b5" Feb 27 18:40:32 crc kubenswrapper[4708]: E0227 18:40:32.291582 4708 log.go:32] "ContainerStatus from runtime service 
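The RemoveContainer / "ContainerStatus from runtime service failed" pairs above are benign: the kubelet re-issues deletions for container IDs the runtime has already removed, and a NotFound answer just means the work is done. The same delete-until-NotFound idiom applies at the API level; a minimal sketch with the kubernetes Python client (pod and namespace taken from the log, the helper name is ours):

```python
from kubernetes import client, config
from kubernetes.client.rest import ApiException

config.load_kube_config()
v1 = client.CoreV1Api()

def delete_pod_idempotent(name: str, namespace: str) -> None:
    """Treat 404 as success: the object is gone either way."""
    try:
        v1.delete_namespaced_pod(name=name, namespace=namespace)
    except ApiException as e:
        if e.status != 404:
            raise  # a real error; NotFound just means already deleted

delete_pod_idempotent("certified-operators-5x2zc", "openshift-marketplace")
```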
failed" err="rpc error: code = NotFound desc = could not find container \"6a35991044bdff00f417659505e673af413b883c6fea616a21e267e91bdc72b5\": container with ID starting with 6a35991044bdff00f417659505e673af413b883c6fea616a21e267e91bdc72b5 not found: ID does not exist" containerID="6a35991044bdff00f417659505e673af413b883c6fea616a21e267e91bdc72b5" Feb 27 18:40:32 crc kubenswrapper[4708]: I0227 18:40:32.291869 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6a35991044bdff00f417659505e673af413b883c6fea616a21e267e91bdc72b5"} err="failed to get container status \"6a35991044bdff00f417659505e673af413b883c6fea616a21e267e91bdc72b5\": rpc error: code = NotFound desc = could not find container \"6a35991044bdff00f417659505e673af413b883c6fea616a21e267e91bdc72b5\": container with ID starting with 6a35991044bdff00f417659505e673af413b883c6fea616a21e267e91bdc72b5 not found: ID does not exist" Feb 27 18:40:34 crc kubenswrapper[4708]: I0227 18:40:34.248107 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" path="/var/lib/kubelet/pods/89443d54-f3b0-4a7f-8cbf-0f96879ecb0f/volumes" Feb 27 18:41:06 crc kubenswrapper[4708]: I0227 18:41:06.833736 4708 scope.go:117] "RemoveContainer" containerID="137aaff0b8cbf1d884028fdabaa9794a49a81c177b74175e2d479df8c3693455" Feb 27 18:41:35 crc kubenswrapper[4708]: I0227 18:41:35.631586 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:41:35 crc kubenswrapper[4708]: I0227 18:41:35.632649 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:42:00 crc kubenswrapper[4708]: I0227 18:42:00.163148 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536962-rbrxs"] Feb 27 18:42:00 crc kubenswrapper[4708]: E0227 18:42:00.164278 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" containerName="extract-utilities" Feb 27 18:42:00 crc kubenswrapper[4708]: I0227 18:42:00.164293 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" containerName="extract-utilities" Feb 27 18:42:00 crc kubenswrapper[4708]: E0227 18:42:00.164325 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" containerName="extract-content" Feb 27 18:42:00 crc kubenswrapper[4708]: I0227 18:42:00.164331 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" containerName="extract-content" Feb 27 18:42:00 crc kubenswrapper[4708]: E0227 18:42:00.164356 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" containerName="registry-server" Feb 27 18:42:00 crc kubenswrapper[4708]: I0227 18:42:00.164362 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" containerName="registry-server" Feb 27 18:42:00 crc kubenswrapper[4708]: I0227 18:42:00.164597 4708 
memory_manager.go:354] "RemoveStaleState removing state" podUID="89443d54-f3b0-4a7f-8cbf-0f96879ecb0f" containerName="registry-server" Feb 27 18:42:00 crc kubenswrapper[4708]: I0227 18:42:00.165396 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536962-rbrxs" Feb 27 18:42:00 crc kubenswrapper[4708]: I0227 18:42:00.168365 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:42:00 crc kubenswrapper[4708]: I0227 18:42:00.169643 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 18:42:00 crc kubenswrapper[4708]: I0227 18:42:00.169674 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:42:00 crc kubenswrapper[4708]: I0227 18:42:00.176396 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536962-rbrxs"] Feb 27 18:42:00 crc kubenswrapper[4708]: I0227 18:42:00.248357 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n52jp\" (UniqueName: \"kubernetes.io/projected/79bf828f-3371-4268-9a09-f647ee2f7716-kube-api-access-n52jp\") pod \"auto-csr-approver-29536962-rbrxs\" (UID: \"79bf828f-3371-4268-9a09-f647ee2f7716\") " pod="openshift-infra/auto-csr-approver-29536962-rbrxs" Feb 27 18:42:00 crc kubenswrapper[4708]: I0227 18:42:00.350945 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n52jp\" (UniqueName: \"kubernetes.io/projected/79bf828f-3371-4268-9a09-f647ee2f7716-kube-api-access-n52jp\") pod \"auto-csr-approver-29536962-rbrxs\" (UID: \"79bf828f-3371-4268-9a09-f647ee2f7716\") " pod="openshift-infra/auto-csr-approver-29536962-rbrxs" Feb 27 18:42:00 crc kubenswrapper[4708]: I0227 18:42:00.399806 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n52jp\" (UniqueName: \"kubernetes.io/projected/79bf828f-3371-4268-9a09-f647ee2f7716-kube-api-access-n52jp\") pod \"auto-csr-approver-29536962-rbrxs\" (UID: \"79bf828f-3371-4268-9a09-f647ee2f7716\") " pod="openshift-infra/auto-csr-approver-29536962-rbrxs" Feb 27 18:42:00 crc kubenswrapper[4708]: I0227 18:42:00.496898 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536962-rbrxs" Feb 27 18:42:01 crc kubenswrapper[4708]: I0227 18:42:01.064664 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536962-rbrxs"] Feb 27 18:42:01 crc kubenswrapper[4708]: I0227 18:42:01.329912 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536962-rbrxs" event={"ID":"79bf828f-3371-4268-9a09-f647ee2f7716","Type":"ContainerStarted","Data":"d80bd1a7ba076b5db5fe3ec3029d416e4fadab6df92d58e3a160a06ce9467dda"} Feb 27 18:42:03 crc kubenswrapper[4708]: I0227 18:42:03.355832 4708 generic.go:334] "Generic (PLEG): container finished" podID="79bf828f-3371-4268-9a09-f647ee2f7716" containerID="986fda1fff6e551f9a1d5fcf85943c2e72ae31387282b060a24837d7a003e6f3" exitCode=0 Feb 27 18:42:03 crc kubenswrapper[4708]: I0227 18:42:03.355999 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536962-rbrxs" event={"ID":"79bf828f-3371-4268-9a09-f647ee2f7716","Type":"ContainerDied","Data":"986fda1fff6e551f9a1d5fcf85943c2e72ae31387282b060a24837d7a003e6f3"} Feb 27 18:42:04 crc kubenswrapper[4708]: I0227 18:42:04.846472 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536962-rbrxs" Feb 27 18:42:04 crc kubenswrapper[4708]: I0227 18:42:04.859563 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n52jp\" (UniqueName: \"kubernetes.io/projected/79bf828f-3371-4268-9a09-f647ee2f7716-kube-api-access-n52jp\") pod \"79bf828f-3371-4268-9a09-f647ee2f7716\" (UID: \"79bf828f-3371-4268-9a09-f647ee2f7716\") " Feb 27 18:42:04 crc kubenswrapper[4708]: I0227 18:42:04.867450 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79bf828f-3371-4268-9a09-f647ee2f7716-kube-api-access-n52jp" (OuterVolumeSpecName: "kube-api-access-n52jp") pod "79bf828f-3371-4268-9a09-f647ee2f7716" (UID: "79bf828f-3371-4268-9a09-f647ee2f7716"). InnerVolumeSpecName "kube-api-access-n52jp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:42:04 crc kubenswrapper[4708]: I0227 18:42:04.963019 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n52jp\" (UniqueName: \"kubernetes.io/projected/79bf828f-3371-4268-9a09-f647ee2f7716-kube-api-access-n52jp\") on node \"crc\" DevicePath \"\"" Feb 27 18:42:05 crc kubenswrapper[4708]: I0227 18:42:05.384264 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536962-rbrxs" event={"ID":"79bf828f-3371-4268-9a09-f647ee2f7716","Type":"ContainerDied","Data":"d80bd1a7ba076b5db5fe3ec3029d416e4fadab6df92d58e3a160a06ce9467dda"} Feb 27 18:42:05 crc kubenswrapper[4708]: I0227 18:42:05.384324 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536962-rbrxs" Feb 27 18:42:05 crc kubenswrapper[4708]: I0227 18:42:05.384334 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d80bd1a7ba076b5db5fe3ec3029d416e4fadab6df92d58e3a160a06ce9467dda" Feb 27 18:42:05 crc kubenswrapper[4708]: I0227 18:42:05.631532 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:42:05 crc kubenswrapper[4708]: I0227 18:42:05.631924 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:42:05 crc kubenswrapper[4708]: I0227 18:42:05.946058 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536956-nv8wd"] Feb 27 18:42:05 crc kubenswrapper[4708]: I0227 18:42:05.961965 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536956-nv8wd"] Feb 27 18:42:06 crc kubenswrapper[4708]: I0227 18:42:06.246485 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0adf7a7-1796-44d3-9283-843c9748c5d4" path="/var/lib/kubelet/pods/b0adf7a7-1796-44d3-9283-843c9748c5d4/volumes" Feb 27 18:42:06 crc kubenswrapper[4708]: I0227 18:42:06.954451 4708 scope.go:117] "RemoveContainer" containerID="99749fcc7811d13a00242e8cc54a1bb1785412ff8b2afe37e5eb5e5e5b19718c" Feb 27 18:42:35 crc kubenswrapper[4708]: I0227 18:42:35.631933 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:42:35 crc kubenswrapper[4708]: I0227 18:42:35.632664 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:42:35 crc kubenswrapper[4708]: I0227 18:42:35.632714 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 18:42:35 crc kubenswrapper[4708]: I0227 18:42:35.633329 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:42:35 crc kubenswrapper[4708]: I0227 18:42:35.633394 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" 
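This is the complete liveness-failure path: the prober's GET against http://127.0.0.1:8798/health is refused, the probe flips to unhealthy, and the kubelet kills machine-config-daemon with its 600-second grace period so it can be restarted. An HTTP probe of this shape can be reproduced in plain Python (URL from the log; the 1-second timeout is an assumption, not the pod's configured probe timeout):

```python
import urllib.request
import urllib.error

def http_probe(url: str, timeout: float = 1.0) -> bool:
    """Roughly what an HTTP liveness probe does: a 2xx/3xx answer is healthy."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 400
    except (urllib.error.URLError, OSError) as exc:
        print(f"Probe failed: {exc}")  # e.g. "connection refused", as in the log
        return False

http_probe("http://127.0.0.1:8798/health")
```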
containerID="cri-o://b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" gracePeriod=600 Feb 27 18:42:35 crc kubenswrapper[4708]: E0227 18:42:35.759620 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:42:35 crc kubenswrapper[4708]: I0227 18:42:35.761260 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" exitCode=0 Feb 27 18:42:35 crc kubenswrapper[4708]: I0227 18:42:35.761307 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34"} Feb 27 18:42:35 crc kubenswrapper[4708]: I0227 18:42:35.761342 4708 scope.go:117] "RemoveContainer" containerID="faeacfb7132d987bcba0b0c2aee99f4b268383805d0d1f0a0e2e47f12f9018e5" Feb 27 18:42:36 crc kubenswrapper[4708]: I0227 18:42:36.781397 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:42:36 crc kubenswrapper[4708]: E0227 18:42:36.782227 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:42:49 crc kubenswrapper[4708]: I0227 18:42:49.229000 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:42:49 crc kubenswrapper[4708]: E0227 18:42:49.229894 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:43:04 crc kubenswrapper[4708]: I0227 18:43:04.228890 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:43:04 crc kubenswrapper[4708]: E0227 18:43:04.230334 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:43:17 crc kubenswrapper[4708]: I0227 18:43:17.228316 4708 scope.go:117] "RemoveContainer" 
containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:43:17 crc kubenswrapper[4708]: E0227 18:43:17.230620 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:43:30 crc kubenswrapper[4708]: I0227 18:43:30.229231 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:43:30 crc kubenswrapper[4708]: E0227 18:43:30.230756 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:43:41 crc kubenswrapper[4708]: I0227 18:43:41.228367 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:43:41 crc kubenswrapper[4708]: E0227 18:43:41.229824 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:43:56 crc kubenswrapper[4708]: I0227 18:43:56.229338 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:43:56 crc kubenswrapper[4708]: E0227 18:43:56.230421 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.158104 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536964-xlvjp"] Feb 27 18:44:00 crc kubenswrapper[4708]: E0227 18:44:00.159111 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79bf828f-3371-4268-9a09-f647ee2f7716" containerName="oc" Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.159126 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="79bf828f-3371-4268-9a09-f647ee2f7716" containerName="oc" Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.159347 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="79bf828f-3371-4268-9a09-f647ee2f7716" containerName="oc" Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.160190 4708 util.go:30] "No sandbox for pod can be found. 
Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.158104 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536964-xlvjp"]
Feb 27 18:44:00 crc kubenswrapper[4708]: E0227 18:44:00.159111 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79bf828f-3371-4268-9a09-f647ee2f7716" containerName="oc"
Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.159126 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="79bf828f-3371-4268-9a09-f647ee2f7716" containerName="oc"
Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.159347 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="79bf828f-3371-4268-9a09-f647ee2f7716" containerName="oc"
Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.160190 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536964-xlvjp"
Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.162653 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5"
Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.162664 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.164959 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.169453 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536964-xlvjp"]
Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.306650 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsv82\" (UniqueName: \"kubernetes.io/projected/6e66e4c2-c6e4-422b-8959-abdfa5e1386f-kube-api-access-fsv82\") pod \"auto-csr-approver-29536964-xlvjp\" (UID: \"6e66e4c2-c6e4-422b-8959-abdfa5e1386f\") " pod="openshift-infra/auto-csr-approver-29536964-xlvjp"
Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.408794 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsv82\" (UniqueName: \"kubernetes.io/projected/6e66e4c2-c6e4-422b-8959-abdfa5e1386f-kube-api-access-fsv82\") pod \"auto-csr-approver-29536964-xlvjp\" (UID: \"6e66e4c2-c6e4-422b-8959-abdfa5e1386f\") " pod="openshift-infra/auto-csr-approver-29536964-xlvjp"
Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.439909 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsv82\" (UniqueName: \"kubernetes.io/projected/6e66e4c2-c6e4-422b-8959-abdfa5e1386f-kube-api-access-fsv82\") pod \"auto-csr-approver-29536964-xlvjp\" (UID: \"6e66e4c2-c6e4-422b-8959-abdfa5e1386f\") " pod="openshift-infra/auto-csr-approver-29536964-xlvjp"
Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.484981 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536964-xlvjp"
Feb 27 18:44:00 crc kubenswrapper[4708]: I0227 18:44:00.959213 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536964-xlvjp"]
Feb 27 18:44:01 crc kubenswrapper[4708]: I0227 18:44:01.768157 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536964-xlvjp" event={"ID":"6e66e4c2-c6e4-422b-8959-abdfa5e1386f","Type":"ContainerStarted","Data":"276e5fd65a50ed611e6e7f6456c443fcd6d197f9c358b79e3452dbe3ba2a18aa"}
Feb 27 18:44:02 crc kubenswrapper[4708]: I0227 18:44:02.781151 4708 generic.go:334] "Generic (PLEG): container finished" podID="6e66e4c2-c6e4-422b-8959-abdfa5e1386f" containerID="039944149c3528909a0b8fbfa5e9741781130d42651d24047cbf94591e356ef6" exitCode=0
Feb 27 18:44:02 crc kubenswrapper[4708]: I0227 18:44:02.781219 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536964-xlvjp" event={"ID":"6e66e4c2-c6e4-422b-8959-abdfa5e1386f","Type":"ContainerDied","Data":"039944149c3528909a0b8fbfa5e9741781130d42651d24047cbf94591e356ef6"}
Feb 27 18:44:04 crc kubenswrapper[4708]: I0227 18:44:04.175210 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536964-xlvjp"
Feb 27 18:44:04 crc kubenswrapper[4708]: I0227 18:44:04.298034 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsv82\" (UniqueName: \"kubernetes.io/projected/6e66e4c2-c6e4-422b-8959-abdfa5e1386f-kube-api-access-fsv82\") pod \"6e66e4c2-c6e4-422b-8959-abdfa5e1386f\" (UID: \"6e66e4c2-c6e4-422b-8959-abdfa5e1386f\") "
Feb 27 18:44:04 crc kubenswrapper[4708]: I0227 18:44:04.304892 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e66e4c2-c6e4-422b-8959-abdfa5e1386f-kube-api-access-fsv82" (OuterVolumeSpecName: "kube-api-access-fsv82") pod "6e66e4c2-c6e4-422b-8959-abdfa5e1386f" (UID: "6e66e4c2-c6e4-422b-8959-abdfa5e1386f"). InnerVolumeSpecName "kube-api-access-fsv82". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 18:44:04 crc kubenswrapper[4708]: I0227 18:44:04.400683 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsv82\" (UniqueName: \"kubernetes.io/projected/6e66e4c2-c6e4-422b-8959-abdfa5e1386f-kube-api-access-fsv82\") on node \"crc\" DevicePath \"\""
Feb 27 18:44:04 crc kubenswrapper[4708]: I0227 18:44:04.803081 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536964-xlvjp" event={"ID":"6e66e4c2-c6e4-422b-8959-abdfa5e1386f","Type":"ContainerDied","Data":"276e5fd65a50ed611e6e7f6456c443fcd6d197f9c358b79e3452dbe3ba2a18aa"}
Feb 27 18:44:04 crc kubenswrapper[4708]: I0227 18:44:04.803928 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="276e5fd65a50ed611e6e7f6456c443fcd6d197f9c358b79e3452dbe3ba2a18aa"
Feb 27 18:44:04 crc kubenswrapper[4708]: I0227 18:44:04.803177 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536964-xlvjp"
Feb 27 18:44:05 crc kubenswrapper[4708]: I0227 18:44:05.271023 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536958-tsgcc"]
Feb 27 18:44:05 crc kubenswrapper[4708]: I0227 18:44:05.280730 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536958-tsgcc"]
Feb 27 18:44:06 crc kubenswrapper[4708]: I0227 18:44:06.241933 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1" path="/var/lib/kubelet/pods/406c22a6-f6c6-40a6-aa3e-fdf1fbd194d1/volumes"
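The numeric suffix in auto-csr-approver-29536962/-29536964 (and in collect-profiles-29536965 below) is the CronJob controller's scheduled run time expressed as minutes since the Unix epoch, which is why each job appears at exactly its decoded minute. Decoding it is pure arithmetic, verifiable against the timestamps in this log:

```python
from datetime import datetime, timezone

def decode_cronjob_suffix(minutes: int) -> datetime:
    """CronJob-created Job names end in scheduled-time-in-minutes-since-epoch."""
    return datetime.fromtimestamp(minutes * 60, tz=timezone.utc)

for n in (29536962, 29536964, 29536965):
    print(n, decode_cronjob_suffix(n).isoformat())
# 29536962 2026-02-27T18:42:00+00:00  <- auto-csr-approver-29536962 ran at 18:42
# 29536964 2026-02-27T18:44:00+00:00  <- auto-csr-approver-29536964 ran at 18:44
# 29536965 2026-02-27T18:45:00+00:00  <- collect-profiles-29536965 ran at 18:45
```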
Feb 27 18:44:07 crc kubenswrapper[4708]: I0227 18:44:07.098032 4708 scope.go:117] "RemoveContainer" containerID="d251394c46c6a4ff863e894b602fe17ffd9016f88dbcc5cf2246d00df8a641a1"
Feb 27 18:44:07 crc kubenswrapper[4708]: I0227 18:44:07.229760 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34"
Feb 27 18:44:07 crc kubenswrapper[4708]: E0227 18:44:07.230095 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 18:44:18 crc kubenswrapper[4708]: I0227 18:44:18.229516 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34"
Feb 27 18:44:18 crc kubenswrapper[4708]: E0227 18:44:18.230677 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 18:44:33 crc kubenswrapper[4708]: I0227 18:44:33.229020 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34"
Feb 27 18:44:33 crc kubenswrapper[4708]: E0227 18:44:33.230365 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 18:44:45 crc kubenswrapper[4708]: I0227 18:44:45.228519 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34"
Feb 27 18:44:45 crc kubenswrapper[4708]: E0227 18:44:45.230004 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 18:44:58 crc kubenswrapper[4708]: I0227 18:44:58.229282 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34"
Feb 27 18:44:58 crc kubenswrapper[4708]: E0227 18:44:58.230249 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.168043 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"]
Feb 27 18:45:00 crc kubenswrapper[4708]: E0227 18:45:00.169319 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6e66e4c2-c6e4-422b-8959-abdfa5e1386f" containerName="oc"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.169336 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="6e66e4c2-c6e4-422b-8959-abdfa5e1386f" containerName="oc"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.169591 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="6e66e4c2-c6e4-422b-8959-abdfa5e1386f" containerName="oc"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.170575 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.181651 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.181885 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.209298 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"]
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.295292 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ad7061d-ae06-46d3-8cca-76bc071bfe32-config-volume\") pod \"collect-profiles-29536965-sv958\" (UID: \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.295906 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ad7061d-ae06-46d3-8cca-76bc071bfe32-secret-volume\") pod \"collect-profiles-29536965-sv958\" (UID: \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.295952 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7wwx\" (UniqueName: \"kubernetes.io/projected/0ad7061d-ae06-46d3-8cca-76bc071bfe32-kube-api-access-m7wwx\") pod \"collect-profiles-29536965-sv958\" (UID: \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.398307 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ad7061d-ae06-46d3-8cca-76bc071bfe32-config-volume\") pod \"collect-profiles-29536965-sv958\" (UID: \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.398408 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m7wwx\" (UniqueName: \"kubernetes.io/projected/0ad7061d-ae06-46d3-8cca-76bc071bfe32-kube-api-access-m7wwx\") pod \"collect-profiles-29536965-sv958\" (UID: \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.398438 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ad7061d-ae06-46d3-8cca-76bc071bfe32-secret-volume\") pod \"collect-profiles-29536965-sv958\" (UID: \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.399256 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ad7061d-ae06-46d3-8cca-76bc071bfe32-config-volume\") pod \"collect-profiles-29536965-sv958\" (UID: \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.408146 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ad7061d-ae06-46d3-8cca-76bc071bfe32-secret-volume\") pod \"collect-profiles-29536965-sv958\" (UID: \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.425028 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7wwx\" (UniqueName: \"kubernetes.io/projected/0ad7061d-ae06-46d3-8cca-76bc071bfe32-kube-api-access-m7wwx\") pod \"collect-profiles-29536965-sv958\" (UID: \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.515687 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"
Feb 27 18:45:00 crc kubenswrapper[4708]: I0227 18:45:00.995089 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"]
Feb 27 18:45:01 crc kubenswrapper[4708]: I0227 18:45:01.401205 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958" event={"ID":"0ad7061d-ae06-46d3-8cca-76bc071bfe32","Type":"ContainerStarted","Data":"c3856c3add79b215b9d90229b3dae36ead5e4240fc15f251713d331dd4b3694d"}
Feb 27 18:45:01 crc kubenswrapper[4708]: I0227 18:45:01.401577 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958" event={"ID":"0ad7061d-ae06-46d3-8cca-76bc071bfe32","Type":"ContainerStarted","Data":"4d080b20e05ceac4d5f5c61d353078a1dc885dc9378a89007b67b428afeddbfb"}
Feb 27 18:45:01 crc kubenswrapper[4708]: I0227 18:45:01.418655 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958" podStartSLOduration=1.41863814 podStartE2EDuration="1.41863814s" podCreationTimestamp="2026-02-27 18:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 18:45:01.415200873 +0000 UTC m=+6699.930998460" watchObservedRunningTime="2026-02-27 18:45:01.41863814 +0000 UTC m=+6699.934435727"
Feb 27 18:45:02 crc kubenswrapper[4708]: I0227 18:45:02.409444 4708 generic.go:334] "Generic (PLEG): container finished" podID="0ad7061d-ae06-46d3-8cca-76bc071bfe32" containerID="c3856c3add79b215b9d90229b3dae36ead5e4240fc15f251713d331dd4b3694d" exitCode=0
Feb 27 18:45:02 crc kubenswrapper[4708]: I0227 18:45:02.409490 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958" event={"ID":"0ad7061d-ae06-46d3-8cca-76bc071bfe32","Type":"ContainerDied","Data":"c3856c3add79b215b9d90229b3dae36ead5e4240fc15f251713d331dd4b3694d"}
Feb 27 18:45:03 crc kubenswrapper[4708]: I0227 18:45:03.888556 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"
Feb 27 18:45:03 crc kubenswrapper[4708]: I0227 18:45:03.983282 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7wwx\" (UniqueName: \"kubernetes.io/projected/0ad7061d-ae06-46d3-8cca-76bc071bfe32-kube-api-access-m7wwx\") pod \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\" (UID: \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\") "
Feb 27 18:45:03 crc kubenswrapper[4708]: I0227 18:45:03.983358 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ad7061d-ae06-46d3-8cca-76bc071bfe32-secret-volume\") pod \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\" (UID: \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\") "
Feb 27 18:45:03 crc kubenswrapper[4708]: I0227 18:45:03.983560 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ad7061d-ae06-46d3-8cca-76bc071bfe32-config-volume\") pod \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\" (UID: \"0ad7061d-ae06-46d3-8cca-76bc071bfe32\") "
Feb 27 18:45:03 crc kubenswrapper[4708]: I0227 18:45:03.985021 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ad7061d-ae06-46d3-8cca-76bc071bfe32-config-volume" (OuterVolumeSpecName: "config-volume") pod "0ad7061d-ae06-46d3-8cca-76bc071bfe32" (UID: "0ad7061d-ae06-46d3-8cca-76bc071bfe32"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 27 18:45:03 crc kubenswrapper[4708]: I0227 18:45:03.991918 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ad7061d-ae06-46d3-8cca-76bc071bfe32-kube-api-access-m7wwx" (OuterVolumeSpecName: "kube-api-access-m7wwx") pod "0ad7061d-ae06-46d3-8cca-76bc071bfe32" (UID: "0ad7061d-ae06-46d3-8cca-76bc071bfe32"). InnerVolumeSpecName "kube-api-access-m7wwx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 18:45:03 crc kubenswrapper[4708]: I0227 18:45:03.998109 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0ad7061d-ae06-46d3-8cca-76bc071bfe32-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0ad7061d-ae06-46d3-8cca-76bc071bfe32" (UID: "0ad7061d-ae06-46d3-8cca-76bc071bfe32"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 27 18:45:04 crc kubenswrapper[4708]: I0227 18:45:04.086252 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m7wwx\" (UniqueName: \"kubernetes.io/projected/0ad7061d-ae06-46d3-8cca-76bc071bfe32-kube-api-access-m7wwx\") on node \"crc\" DevicePath \"\""
Feb 27 18:45:04 crc kubenswrapper[4708]: I0227 18:45:04.086312 4708 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0ad7061d-ae06-46d3-8cca-76bc071bfe32-secret-volume\") on node \"crc\" DevicePath \"\""
Feb 27 18:45:04 crc kubenswrapper[4708]: I0227 18:45:04.086332 4708 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ad7061d-ae06-46d3-8cca-76bc071bfe32-config-volume\") on node \"crc\" DevicePath \"\""
Feb 27 18:45:04 crc kubenswrapper[4708]: E0227 18:45:04.430167 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ad7061d_ae06_46d3_8cca_76bc071bfe32.slice/crio-4d080b20e05ceac4d5f5c61d353078a1dc885dc9378a89007b67b428afeddbfb\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0ad7061d_ae06_46d3_8cca_76bc071bfe32.slice\": RecentStats: unable to find data in memory cache]"
Feb 27 18:45:04 crc kubenswrapper[4708]: I0227 18:45:04.433097 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958" event={"ID":"0ad7061d-ae06-46d3-8cca-76bc071bfe32","Type":"ContainerDied","Data":"4d080b20e05ceac4d5f5c61d353078a1dc885dc9378a89007b67b428afeddbfb"}
Feb 27 18:45:04 crc kubenswrapper[4708]: I0227 18:45:04.433133 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958" Feb 27 18:45:04 crc kubenswrapper[4708]: I0227 18:45:04.433136 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4d080b20e05ceac4d5f5c61d353078a1dc885dc9378a89007b67b428afeddbfb" Feb 27 18:45:04 crc kubenswrapper[4708]: I0227 18:45:04.507676 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv"] Feb 27 18:45:04 crc kubenswrapper[4708]: I0227 18:45:04.516194 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536920-2zcgv"] Feb 27 18:45:06 crc kubenswrapper[4708]: I0227 18:45:06.239457 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f" path="/var/lib/kubelet/pods/8c2e5bdc-0049-41dc-8cc7-cfb16ed96b4f/volumes" Feb 27 18:45:07 crc kubenswrapper[4708]: I0227 18:45:07.200433 4708 scope.go:117] "RemoveContainer" containerID="3f5676a035056e250c17dc3330b09159457e45c52441a457959606a4d006da1e" Feb 27 18:45:10 crc kubenswrapper[4708]: I0227 18:45:10.228141 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:45:10 crc kubenswrapper[4708]: E0227 18:45:10.229229 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:45:22 crc kubenswrapper[4708]: I0227 18:45:22.991416 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5hsh8"] Feb 27 18:45:22 crc kubenswrapper[4708]: E0227 18:45:22.993312 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0ad7061d-ae06-46d3-8cca-76bc071bfe32" containerName="collect-profiles" Feb 27 18:45:22 crc kubenswrapper[4708]: I0227 18:45:22.993330 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0ad7061d-ae06-46d3-8cca-76bc071bfe32" containerName="collect-profiles" Feb 27 18:45:22 crc kubenswrapper[4708]: I0227 18:45:22.993777 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="0ad7061d-ae06-46d3-8cca-76bc071bfe32" containerName="collect-profiles" Feb 27 18:45:22 crc kubenswrapper[4708]: I0227 18:45:22.999317 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5hsh8" Feb 27 18:45:23 crc kubenswrapper[4708]: I0227 18:45:23.012396 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hsh8"] Feb 27 18:45:23 crc kubenswrapper[4708]: I0227 18:45:23.131331 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2r8f5\" (UniqueName: \"kubernetes.io/projected/84544043-05df-4807-b270-b0f58ca8d995-kube-api-access-2r8f5\") pod \"redhat-marketplace-5hsh8\" (UID: \"84544043-05df-4807-b270-b0f58ca8d995\") " pod="openshift-marketplace/redhat-marketplace-5hsh8" Feb 27 18:45:23 crc kubenswrapper[4708]: I0227 18:45:23.131812 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84544043-05df-4807-b270-b0f58ca8d995-catalog-content\") pod \"redhat-marketplace-5hsh8\" (UID: \"84544043-05df-4807-b270-b0f58ca8d995\") " pod="openshift-marketplace/redhat-marketplace-5hsh8" Feb 27 18:45:23 crc kubenswrapper[4708]: I0227 18:45:23.131891 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84544043-05df-4807-b270-b0f58ca8d995-utilities\") pod \"redhat-marketplace-5hsh8\" (UID: \"84544043-05df-4807-b270-b0f58ca8d995\") " pod="openshift-marketplace/redhat-marketplace-5hsh8" Feb 27 18:45:23 crc kubenswrapper[4708]: I0227 18:45:23.233471 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2r8f5\" (UniqueName: \"kubernetes.io/projected/84544043-05df-4807-b270-b0f58ca8d995-kube-api-access-2r8f5\") pod \"redhat-marketplace-5hsh8\" (UID: \"84544043-05df-4807-b270-b0f58ca8d995\") " pod="openshift-marketplace/redhat-marketplace-5hsh8" Feb 27 18:45:23 crc kubenswrapper[4708]: I0227 18:45:23.233521 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84544043-05df-4807-b270-b0f58ca8d995-catalog-content\") pod \"redhat-marketplace-5hsh8\" (UID: \"84544043-05df-4807-b270-b0f58ca8d995\") " pod="openshift-marketplace/redhat-marketplace-5hsh8" Feb 27 18:45:23 crc kubenswrapper[4708]: I0227 18:45:23.233580 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84544043-05df-4807-b270-b0f58ca8d995-utilities\") pod \"redhat-marketplace-5hsh8\" (UID: \"84544043-05df-4807-b270-b0f58ca8d995\") " pod="openshift-marketplace/redhat-marketplace-5hsh8" Feb 27 18:45:23 crc kubenswrapper[4708]: I0227 18:45:23.234175 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84544043-05df-4807-b270-b0f58ca8d995-utilities\") pod \"redhat-marketplace-5hsh8\" (UID: \"84544043-05df-4807-b270-b0f58ca8d995\") " pod="openshift-marketplace/redhat-marketplace-5hsh8" Feb 27 18:45:23 crc kubenswrapper[4708]: I0227 18:45:23.234174 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84544043-05df-4807-b270-b0f58ca8d995-catalog-content\") pod \"redhat-marketplace-5hsh8\" (UID: \"84544043-05df-4807-b270-b0f58ca8d995\") " pod="openshift-marketplace/redhat-marketplace-5hsh8" Feb 27 18:45:23 crc kubenswrapper[4708]: I0227 18:45:23.256035 4708 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-2r8f5\" (UniqueName: \"kubernetes.io/projected/84544043-05df-4807-b270-b0f58ca8d995-kube-api-access-2r8f5\") pod \"redhat-marketplace-5hsh8\" (UID: \"84544043-05df-4807-b270-b0f58ca8d995\") " pod="openshift-marketplace/redhat-marketplace-5hsh8" Feb 27 18:45:23 crc kubenswrapper[4708]: I0227 18:45:23.334755 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5hsh8" Feb 27 18:45:23 crc kubenswrapper[4708]: I0227 18:45:23.793707 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hsh8"] Feb 27 18:45:24 crc kubenswrapper[4708]: I0227 18:45:24.228360 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:45:24 crc kubenswrapper[4708]: E0227 18:45:24.228666 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:45:24 crc kubenswrapper[4708]: I0227 18:45:24.652948 4708 generic.go:334] "Generic (PLEG): container finished" podID="84544043-05df-4807-b270-b0f58ca8d995" containerID="7c7733ccc6a2f973b4a66102f5fec0c2287820b5ab87456e2c5d3be371dfdfb2" exitCode=0 Feb 27 18:45:24 crc kubenswrapper[4708]: I0227 18:45:24.652992 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hsh8" event={"ID":"84544043-05df-4807-b270-b0f58ca8d995","Type":"ContainerDied","Data":"7c7733ccc6a2f973b4a66102f5fec0c2287820b5ab87456e2c5d3be371dfdfb2"} Feb 27 18:45:24 crc kubenswrapper[4708]: I0227 18:45:24.653016 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hsh8" event={"ID":"84544043-05df-4807-b270-b0f58ca8d995","Type":"ContainerStarted","Data":"22adb6687d9de98b2bfb645834f81d28d23b5e8237840eb1ab9d38bfb2ee2a17"} Feb 27 18:45:24 crc kubenswrapper[4708]: I0227 18:45:24.655427 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 18:45:26 crc kubenswrapper[4708]: I0227 18:45:26.678191 4708 generic.go:334] "Generic (PLEG): container finished" podID="84544043-05df-4807-b270-b0f58ca8d995" containerID="d5706259c32b6e9778f214410d282670d2da8f36347ab3869ed172c84452d0f5" exitCode=0 Feb 27 18:45:26 crc kubenswrapper[4708]: I0227 18:45:26.678291 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hsh8" event={"ID":"84544043-05df-4807-b270-b0f58ca8d995","Type":"ContainerDied","Data":"d5706259c32b6e9778f214410d282670d2da8f36347ab3869ed172c84452d0f5"} Feb 27 18:45:27 crc kubenswrapper[4708]: I0227 18:45:27.693720 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hsh8" event={"ID":"84544043-05df-4807-b270-b0f58ca8d995","Type":"ContainerStarted","Data":"8463b2d135c42addc7ddad774e65548ea5648aa9950ee79960c83b574820fc0c"} Feb 27 18:45:27 crc kubenswrapper[4708]: I0227 18:45:27.719545 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5hsh8" podStartSLOduration=3.269497656 podStartE2EDuration="5.719528029s" 
Feb 27 18:45:33 crc kubenswrapper[4708]: I0227 18:45:33.335466 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5hsh8"
Feb 27 18:45:33 crc kubenswrapper[4708]: I0227 18:45:33.336395 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5hsh8"
Feb 27 18:45:33 crc kubenswrapper[4708]: I0227 18:45:33.390829 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5hsh8"
Feb 27 18:45:33 crc kubenswrapper[4708]: I0227 18:45:33.843395 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5hsh8"
Feb 27 18:45:33 crc kubenswrapper[4708]: I0227 18:45:33.928729 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hsh8"]
Feb 27 18:45:35 crc kubenswrapper[4708]: I0227 18:45:35.804015 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5hsh8" podUID="84544043-05df-4807-b270-b0f58ca8d995" containerName="registry-server" containerID="cri-o://8463b2d135c42addc7ddad774e65548ea5648aa9950ee79960c83b574820fc0c" gracePeriod=2
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.436213 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5hsh8"
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.461246 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84544043-05df-4807-b270-b0f58ca8d995-catalog-content\") pod \"84544043-05df-4807-b270-b0f58ca8d995\" (UID: \"84544043-05df-4807-b270-b0f58ca8d995\") "
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.461396 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2r8f5\" (UniqueName: \"kubernetes.io/projected/84544043-05df-4807-b270-b0f58ca8d995-kube-api-access-2r8f5\") pod \"84544043-05df-4807-b270-b0f58ca8d995\" (UID: \"84544043-05df-4807-b270-b0f58ca8d995\") "
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.461636 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84544043-05df-4807-b270-b0f58ca8d995-utilities\") pod \"84544043-05df-4807-b270-b0f58ca8d995\" (UID: \"84544043-05df-4807-b270-b0f58ca8d995\") "
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.463192 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84544043-05df-4807-b270-b0f58ca8d995-utilities" (OuterVolumeSpecName: "utilities") pod "84544043-05df-4807-b270-b0f58ca8d995" (UID: "84544043-05df-4807-b270-b0f58ca8d995"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.472197 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84544043-05df-4807-b270-b0f58ca8d995-kube-api-access-2r8f5" (OuterVolumeSpecName: "kube-api-access-2r8f5") pod "84544043-05df-4807-b270-b0f58ca8d995" (UID: "84544043-05df-4807-b270-b0f58ca8d995"). InnerVolumeSpecName "kube-api-access-2r8f5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.488631 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/84544043-05df-4807-b270-b0f58ca8d995-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "84544043-05df-4807-b270-b0f58ca8d995" (UID: "84544043-05df-4807-b270-b0f58ca8d995"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.564269 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2r8f5\" (UniqueName: \"kubernetes.io/projected/84544043-05df-4807-b270-b0f58ca8d995-kube-api-access-2r8f5\") on node \"crc\" DevicePath \"\""
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.564306 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/84544043-05df-4807-b270-b0f58ca8d995-utilities\") on node \"crc\" DevicePath \"\""
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.564316 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/84544043-05df-4807-b270-b0f58ca8d995-catalog-content\") on node \"crc\" DevicePath \"\""
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.820176 4708 generic.go:334] "Generic (PLEG): container finished" podID="84544043-05df-4807-b270-b0f58ca8d995" containerID="8463b2d135c42addc7ddad774e65548ea5648aa9950ee79960c83b574820fc0c" exitCode=0
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.820227 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hsh8" event={"ID":"84544043-05df-4807-b270-b0f58ca8d995","Type":"ContainerDied","Data":"8463b2d135c42addc7ddad774e65548ea5648aa9950ee79960c83b574820fc0c"}
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.820256 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5hsh8" event={"ID":"84544043-05df-4807-b270-b0f58ca8d995","Type":"ContainerDied","Data":"22adb6687d9de98b2bfb645834f81d28d23b5e8237840eb1ab9d38bfb2ee2a17"}
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.820274 4708 scope.go:117] "RemoveContainer" containerID="8463b2d135c42addc7ddad774e65548ea5648aa9950ee79960c83b574820fc0c"
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.820267 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5hsh8"
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.858310 4708 scope.go:117] "RemoveContainer" containerID="d5706259c32b6e9778f214410d282670d2da8f36347ab3869ed172c84452d0f5"
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.869977 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hsh8"]
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.887740 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5hsh8"]
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.888615 4708 scope.go:117] "RemoveContainer" containerID="7c7733ccc6a2f973b4a66102f5fec0c2287820b5ab87456e2c5d3be371dfdfb2"
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.939477 4708 scope.go:117] "RemoveContainer" containerID="8463b2d135c42addc7ddad774e65548ea5648aa9950ee79960c83b574820fc0c"
Feb 27 18:45:36 crc kubenswrapper[4708]: E0227 18:45:36.940302 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8463b2d135c42addc7ddad774e65548ea5648aa9950ee79960c83b574820fc0c\": container with ID starting with 8463b2d135c42addc7ddad774e65548ea5648aa9950ee79960c83b574820fc0c not found: ID does not exist" containerID="8463b2d135c42addc7ddad774e65548ea5648aa9950ee79960c83b574820fc0c"
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.940368 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8463b2d135c42addc7ddad774e65548ea5648aa9950ee79960c83b574820fc0c"} err="failed to get container status \"8463b2d135c42addc7ddad774e65548ea5648aa9950ee79960c83b574820fc0c\": rpc error: code = NotFound desc = could not find container \"8463b2d135c42addc7ddad774e65548ea5648aa9950ee79960c83b574820fc0c\": container with ID starting with 8463b2d135c42addc7ddad774e65548ea5648aa9950ee79960c83b574820fc0c not found: ID does not exist"
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.940415 4708 scope.go:117] "RemoveContainer" containerID="d5706259c32b6e9778f214410d282670d2da8f36347ab3869ed172c84452d0f5"
Feb 27 18:45:36 crc kubenswrapper[4708]: E0227 18:45:36.941084 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5706259c32b6e9778f214410d282670d2da8f36347ab3869ed172c84452d0f5\": container with ID starting with d5706259c32b6e9778f214410d282670d2da8f36347ab3869ed172c84452d0f5 not found: ID does not exist" containerID="d5706259c32b6e9778f214410d282670d2da8f36347ab3869ed172c84452d0f5"
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.941186 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5706259c32b6e9778f214410d282670d2da8f36347ab3869ed172c84452d0f5"} err="failed to get container status \"d5706259c32b6e9778f214410d282670d2da8f36347ab3869ed172c84452d0f5\": rpc error: code = NotFound desc = could not find container \"d5706259c32b6e9778f214410d282670d2da8f36347ab3869ed172c84452d0f5\": container with ID starting with d5706259c32b6e9778f214410d282670d2da8f36347ab3869ed172c84452d0f5 not found: ID does not exist"
Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.941245 4708 scope.go:117] "RemoveContainer" containerID="7c7733ccc6a2f973b4a66102f5fec0c2287820b5ab87456e2c5d3be371dfdfb2"
Feb 27 18:45:36 crc kubenswrapper[4708]: E0227 18:45:36.942051 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c7733ccc6a2f973b4a66102f5fec0c2287820b5ab87456e2c5d3be371dfdfb2\": container with ID starting with 7c7733ccc6a2f973b4a66102f5fec0c2287820b5ab87456e2c5d3be371dfdfb2 not found: ID does not exist" containerID="7c7733ccc6a2f973b4a66102f5fec0c2287820b5ab87456e2c5d3be371dfdfb2"
failed" err="rpc error: code = NotFound desc = could not find container \"7c7733ccc6a2f973b4a66102f5fec0c2287820b5ab87456e2c5d3be371dfdfb2\": container with ID starting with 7c7733ccc6a2f973b4a66102f5fec0c2287820b5ab87456e2c5d3be371dfdfb2 not found: ID does not exist" containerID="7c7733ccc6a2f973b4a66102f5fec0c2287820b5ab87456e2c5d3be371dfdfb2" Feb 27 18:45:36 crc kubenswrapper[4708]: I0227 18:45:36.942139 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c7733ccc6a2f973b4a66102f5fec0c2287820b5ab87456e2c5d3be371dfdfb2"} err="failed to get container status \"7c7733ccc6a2f973b4a66102f5fec0c2287820b5ab87456e2c5d3be371dfdfb2\": rpc error: code = NotFound desc = could not find container \"7c7733ccc6a2f973b4a66102f5fec0c2287820b5ab87456e2c5d3be371dfdfb2\": container with ID starting with 7c7733ccc6a2f973b4a66102f5fec0c2287820b5ab87456e2c5d3be371dfdfb2 not found: ID does not exist" Feb 27 18:45:38 crc kubenswrapper[4708]: I0227 18:45:38.228378 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:45:38 crc kubenswrapper[4708]: E0227 18:45:38.228659 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:45:38 crc kubenswrapper[4708]: I0227 18:45:38.242147 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84544043-05df-4807-b270-b0f58ca8d995" path="/var/lib/kubelet/pods/84544043-05df-4807-b270-b0f58ca8d995/volumes" Feb 27 18:45:52 crc kubenswrapper[4708]: I0227 18:45:52.228523 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:45:52 crc kubenswrapper[4708]: E0227 18:45:52.229692 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.151753 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536966-k2vpz"] Feb 27 18:46:00 crc kubenswrapper[4708]: E0227 18:46:00.153318 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84544043-05df-4807-b270-b0f58ca8d995" containerName="extract-content" Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.153338 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="84544043-05df-4807-b270-b0f58ca8d995" containerName="extract-content" Feb 27 18:46:00 crc kubenswrapper[4708]: E0227 18:46:00.153376 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84544043-05df-4807-b270-b0f58ca8d995" containerName="registry-server" Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.153387 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="84544043-05df-4807-b270-b0f58ca8d995" containerName="registry-server" Feb 27 18:46:00 crc kubenswrapper[4708]: E0227 
18:46:00.153407 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84544043-05df-4807-b270-b0f58ca8d995" containerName="extract-utilities" Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.153415 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="84544043-05df-4807-b270-b0f58ca8d995" containerName="extract-utilities" Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.153673 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="84544043-05df-4807-b270-b0f58ca8d995" containerName="registry-server" Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.154599 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536966-k2vpz" Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.156969 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.156974 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.157198 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.160542 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536966-k2vpz"] Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.229790 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmhc8\" (UniqueName: \"kubernetes.io/projected/cf726d56-35b5-4c1d-be1c-ded0e4d30ca2-kube-api-access-wmhc8\") pod \"auto-csr-approver-29536966-k2vpz\" (UID: \"cf726d56-35b5-4c1d-be1c-ded0e4d30ca2\") " pod="openshift-infra/auto-csr-approver-29536966-k2vpz" Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.332089 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wmhc8\" (UniqueName: \"kubernetes.io/projected/cf726d56-35b5-4c1d-be1c-ded0e4d30ca2-kube-api-access-wmhc8\") pod \"auto-csr-approver-29536966-k2vpz\" (UID: \"cf726d56-35b5-4c1d-be1c-ded0e4d30ca2\") " pod="openshift-infra/auto-csr-approver-29536966-k2vpz" Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.352990 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmhc8\" (UniqueName: \"kubernetes.io/projected/cf726d56-35b5-4c1d-be1c-ded0e4d30ca2-kube-api-access-wmhc8\") pod \"auto-csr-approver-29536966-k2vpz\" (UID: \"cf726d56-35b5-4c1d-be1c-ded0e4d30ca2\") " pod="openshift-infra/auto-csr-approver-29536966-k2vpz" Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.477202 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536966-k2vpz" Feb 27 18:46:00 crc kubenswrapper[4708]: I0227 18:46:00.880226 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536966-k2vpz"] Feb 27 18:46:00 crc kubenswrapper[4708]: W0227 18:46:00.886178 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcf726d56_35b5_4c1d_be1c_ded0e4d30ca2.slice/crio-c86b493bdcd494877eda67822912bde9e06c0d71a0ef54c2b744b76c374005d5 WatchSource:0}: Error finding container c86b493bdcd494877eda67822912bde9e06c0d71a0ef54c2b744b76c374005d5: Status 404 returned error can't find the container with id c86b493bdcd494877eda67822912bde9e06c0d71a0ef54c2b744b76c374005d5 Feb 27 18:46:01 crc kubenswrapper[4708]: I0227 18:46:01.103751 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536966-k2vpz" event={"ID":"cf726d56-35b5-4c1d-be1c-ded0e4d30ca2","Type":"ContainerStarted","Data":"c86b493bdcd494877eda67822912bde9e06c0d71a0ef54c2b744b76c374005d5"} Feb 27 18:46:03 crc kubenswrapper[4708]: I0227 18:46:03.129114 4708 generic.go:334] "Generic (PLEG): container finished" podID="cf726d56-35b5-4c1d-be1c-ded0e4d30ca2" containerID="8165d8fdafef9ea437773c43d21d1ef4eeb53e669e6666200950ba58db4f630c" exitCode=0 Feb 27 18:46:03 crc kubenswrapper[4708]: I0227 18:46:03.129674 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536966-k2vpz" event={"ID":"cf726d56-35b5-4c1d-be1c-ded0e4d30ca2","Type":"ContainerDied","Data":"8165d8fdafef9ea437773c43d21d1ef4eeb53e669e6666200950ba58db4f630c"} Feb 27 18:46:03 crc kubenswrapper[4708]: I0227 18:46:03.229803 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:46:03 crc kubenswrapper[4708]: E0227 18:46:03.230448 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:46:04 crc kubenswrapper[4708]: I0227 18:46:04.596810 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536966-k2vpz" Feb 27 18:46:04 crc kubenswrapper[4708]: I0227 18:46:04.622294 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wmhc8\" (UniqueName: \"kubernetes.io/projected/cf726d56-35b5-4c1d-be1c-ded0e4d30ca2-kube-api-access-wmhc8\") pod \"cf726d56-35b5-4c1d-be1c-ded0e4d30ca2\" (UID: \"cf726d56-35b5-4c1d-be1c-ded0e4d30ca2\") " Feb 27 18:46:04 crc kubenswrapper[4708]: I0227 18:46:04.629285 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cf726d56-35b5-4c1d-be1c-ded0e4d30ca2-kube-api-access-wmhc8" (OuterVolumeSpecName: "kube-api-access-wmhc8") pod "cf726d56-35b5-4c1d-be1c-ded0e4d30ca2" (UID: "cf726d56-35b5-4c1d-be1c-ded0e4d30ca2"). InnerVolumeSpecName "kube-api-access-wmhc8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:04 crc kubenswrapper[4708]: I0227 18:46:04.724668 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wmhc8\" (UniqueName: \"kubernetes.io/projected/cf726d56-35b5-4c1d-be1c-ded0e4d30ca2-kube-api-access-wmhc8\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:05 crc kubenswrapper[4708]: I0227 18:46:05.153626 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536966-k2vpz" event={"ID":"cf726d56-35b5-4c1d-be1c-ded0e4d30ca2","Type":"ContainerDied","Data":"c86b493bdcd494877eda67822912bde9e06c0d71a0ef54c2b744b76c374005d5"} Feb 27 18:46:05 crc kubenswrapper[4708]: I0227 18:46:05.153675 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c86b493bdcd494877eda67822912bde9e06c0d71a0ef54c2b744b76c374005d5" Feb 27 18:46:05 crc kubenswrapper[4708]: I0227 18:46:05.153700 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536966-k2vpz" Feb 27 18:46:05 crc kubenswrapper[4708]: I0227 18:46:05.694827 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536960-wb62j"] Feb 27 18:46:05 crc kubenswrapper[4708]: I0227 18:46:05.707266 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536960-wb62j"] Feb 27 18:46:06 crc kubenswrapper[4708]: I0227 18:46:06.239788 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23538ea1-6ab7-4e9f-85b4-858f93b5ac57" path="/var/lib/kubelet/pods/23538ea1-6ab7-4e9f-85b4-858f93b5ac57/volumes" Feb 27 18:46:07 crc kubenswrapper[4708]: I0227 18:46:07.274759 4708 scope.go:117] "RemoveContainer" containerID="65827f0a9f6deb03e43794efecc4a41b96c0c224977c289c0f42fc4645345913" Feb 27 18:46:14 crc kubenswrapper[4708]: I0227 18:46:14.871652 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k7vgp"] Feb 27 18:46:14 crc kubenswrapper[4708]: E0227 18:46:14.872676 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cf726d56-35b5-4c1d-be1c-ded0e4d30ca2" containerName="oc" Feb 27 18:46:14 crc kubenswrapper[4708]: I0227 18:46:14.872689 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="cf726d56-35b5-4c1d-be1c-ded0e4d30ca2" containerName="oc" Feb 27 18:46:14 crc kubenswrapper[4708]: I0227 18:46:14.872934 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="cf726d56-35b5-4c1d-be1c-ded0e4d30ca2" containerName="oc" Feb 27 18:46:14 crc kubenswrapper[4708]: I0227 18:46:14.874386 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:14 crc kubenswrapper[4708]: I0227 18:46:14.884596 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k7vgp"] Feb 27 18:46:15 crc kubenswrapper[4708]: I0227 18:46:15.022890 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beccdce3-d1b9-4558-b649-3edfd12eec55-catalog-content\") pod \"redhat-operators-k7vgp\" (UID: \"beccdce3-d1b9-4558-b649-3edfd12eec55\") " pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:15 crc kubenswrapper[4708]: I0227 18:46:15.023046 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln8np\" (UniqueName: \"kubernetes.io/projected/beccdce3-d1b9-4558-b649-3edfd12eec55-kube-api-access-ln8np\") pod \"redhat-operators-k7vgp\" (UID: \"beccdce3-d1b9-4558-b649-3edfd12eec55\") " pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:15 crc kubenswrapper[4708]: I0227 18:46:15.023096 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beccdce3-d1b9-4558-b649-3edfd12eec55-utilities\") pod \"redhat-operators-k7vgp\" (UID: \"beccdce3-d1b9-4558-b649-3edfd12eec55\") " pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:15 crc kubenswrapper[4708]: I0227 18:46:15.124595 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ln8np\" (UniqueName: \"kubernetes.io/projected/beccdce3-d1b9-4558-b649-3edfd12eec55-kube-api-access-ln8np\") pod \"redhat-operators-k7vgp\" (UID: \"beccdce3-d1b9-4558-b649-3edfd12eec55\") " pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:15 crc kubenswrapper[4708]: I0227 18:46:15.124952 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beccdce3-d1b9-4558-b649-3edfd12eec55-utilities\") pod \"redhat-operators-k7vgp\" (UID: \"beccdce3-d1b9-4558-b649-3edfd12eec55\") " pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:15 crc kubenswrapper[4708]: I0227 18:46:15.125153 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beccdce3-d1b9-4558-b649-3edfd12eec55-catalog-content\") pod \"redhat-operators-k7vgp\" (UID: \"beccdce3-d1b9-4558-b649-3edfd12eec55\") " pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:15 crc kubenswrapper[4708]: I0227 18:46:15.125689 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beccdce3-d1b9-4558-b649-3edfd12eec55-utilities\") pod \"redhat-operators-k7vgp\" (UID: \"beccdce3-d1b9-4558-b649-3edfd12eec55\") " pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:15 crc kubenswrapper[4708]: I0227 18:46:15.125714 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beccdce3-d1b9-4558-b649-3edfd12eec55-catalog-content\") pod \"redhat-operators-k7vgp\" (UID: \"beccdce3-d1b9-4558-b649-3edfd12eec55\") " pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:15 crc kubenswrapper[4708]: I0227 18:46:15.144683 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ln8np\" (UniqueName: \"kubernetes.io/projected/beccdce3-d1b9-4558-b649-3edfd12eec55-kube-api-access-ln8np\") pod \"redhat-operators-k7vgp\" (UID: \"beccdce3-d1b9-4558-b649-3edfd12eec55\") " pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:15 crc kubenswrapper[4708]: I0227 18:46:15.194503 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:15 crc kubenswrapper[4708]: I0227 18:46:15.236935 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:46:15 crc kubenswrapper[4708]: E0227 18:46:15.237422 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:46:15 crc kubenswrapper[4708]: I0227 18:46:15.720815 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k7vgp"] Feb 27 18:46:16 crc kubenswrapper[4708]: I0227 18:46:16.275259 4708 generic.go:334] "Generic (PLEG): container finished" podID="beccdce3-d1b9-4558-b649-3edfd12eec55" containerID="3fe6e3b7d405f72065fb49e82d3c387516eaa8b8cd07ae3848746eb14db980b1" exitCode=0 Feb 27 18:46:16 crc kubenswrapper[4708]: I0227 18:46:16.275596 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vgp" event={"ID":"beccdce3-d1b9-4558-b649-3edfd12eec55","Type":"ContainerDied","Data":"3fe6e3b7d405f72065fb49e82d3c387516eaa8b8cd07ae3848746eb14db980b1"} Feb 27 18:46:16 crc kubenswrapper[4708]: I0227 18:46:16.275622 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vgp" event={"ID":"beccdce3-d1b9-4558-b649-3edfd12eec55","Type":"ContainerStarted","Data":"6594908a1c9563f41186a88212adb776fd4ad6c901b708cd001a6a32c7534b1f"} Feb 27 18:46:17 crc kubenswrapper[4708]: I0227 18:46:17.286185 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vgp" event={"ID":"beccdce3-d1b9-4558-b649-3edfd12eec55","Type":"ContainerStarted","Data":"21eefa60635f7561508301cd2aef2f11f0e0edc05350fb9985905c9fe5f00508"} Feb 27 18:46:20 crc kubenswrapper[4708]: I0227 18:46:20.318716 4708 generic.go:334] "Generic (PLEG): container finished" podID="beccdce3-d1b9-4558-b649-3edfd12eec55" containerID="21eefa60635f7561508301cd2aef2f11f0e0edc05350fb9985905c9fe5f00508" exitCode=0 Feb 27 18:46:20 crc kubenswrapper[4708]: I0227 18:46:20.318783 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vgp" event={"ID":"beccdce3-d1b9-4558-b649-3edfd12eec55","Type":"ContainerDied","Data":"21eefa60635f7561508301cd2aef2f11f0e0edc05350fb9985905c9fe5f00508"} Feb 27 18:46:21 crc kubenswrapper[4708]: I0227 18:46:21.333474 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vgp" event={"ID":"beccdce3-d1b9-4558-b649-3edfd12eec55","Type":"ContainerStarted","Data":"4ffee6740ce7d5103e302a3ec6cd4aca9b690bb36a2845fe18aeec245edfee7d"} Feb 27 18:46:21 crc kubenswrapper[4708]: I0227 18:46:21.369112 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-operators-k7vgp" podStartSLOduration=2.884288609 podStartE2EDuration="7.369093608s" podCreationTimestamp="2026-02-27 18:46:14 +0000 UTC" firstStartedPulling="2026-02-27 18:46:16.279336612 +0000 UTC m=+6774.795134199" lastFinishedPulling="2026-02-27 18:46:20.764141601 +0000 UTC m=+6779.279939198" observedRunningTime="2026-02-27 18:46:21.362150011 +0000 UTC m=+6779.877947638" watchObservedRunningTime="2026-02-27 18:46:21.369093608 +0000 UTC m=+6779.884891235" Feb 27 18:46:25 crc kubenswrapper[4708]: I0227 18:46:25.195143 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:25 crc kubenswrapper[4708]: I0227 18:46:25.196218 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:26 crc kubenswrapper[4708]: I0227 18:46:26.248328 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k7vgp" podUID="beccdce3-d1b9-4558-b649-3edfd12eec55" containerName="registry-server" probeResult="failure" output=< Feb 27 18:46:26 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 18:46:26 crc kubenswrapper[4708]: > Feb 27 18:46:27 crc kubenswrapper[4708]: I0227 18:46:27.228724 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:46:27 crc kubenswrapper[4708]: E0227 18:46:27.229636 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:46:35 crc kubenswrapper[4708]: I0227 18:46:35.267093 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:35 crc kubenswrapper[4708]: I0227 18:46:35.333965 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:35 crc kubenswrapper[4708]: I0227 18:46:35.523829 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k7vgp"] Feb 27 18:46:36 crc kubenswrapper[4708]: I0227 18:46:36.494741 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k7vgp" podUID="beccdce3-d1b9-4558-b649-3edfd12eec55" containerName="registry-server" containerID="cri-o://4ffee6740ce7d5103e302a3ec6cd4aca9b690bb36a2845fe18aeec245edfee7d" gracePeriod=2 Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.048739 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.151715 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beccdce3-d1b9-4558-b649-3edfd12eec55-utilities\") pod \"beccdce3-d1b9-4558-b649-3edfd12eec55\" (UID: \"beccdce3-d1b9-4558-b649-3edfd12eec55\") " Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.151920 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beccdce3-d1b9-4558-b649-3edfd12eec55-catalog-content\") pod \"beccdce3-d1b9-4558-b649-3edfd12eec55\" (UID: \"beccdce3-d1b9-4558-b649-3edfd12eec55\") " Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.152022 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ln8np\" (UniqueName: \"kubernetes.io/projected/beccdce3-d1b9-4558-b649-3edfd12eec55-kube-api-access-ln8np\") pod \"beccdce3-d1b9-4558-b649-3edfd12eec55\" (UID: \"beccdce3-d1b9-4558-b649-3edfd12eec55\") " Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.152728 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beccdce3-d1b9-4558-b649-3edfd12eec55-utilities" (OuterVolumeSpecName: "utilities") pod "beccdce3-d1b9-4558-b649-3edfd12eec55" (UID: "beccdce3-d1b9-4558-b649-3edfd12eec55"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.159060 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/beccdce3-d1b9-4558-b649-3edfd12eec55-kube-api-access-ln8np" (OuterVolumeSpecName: "kube-api-access-ln8np") pod "beccdce3-d1b9-4558-b649-3edfd12eec55" (UID: "beccdce3-d1b9-4558-b649-3edfd12eec55"). InnerVolumeSpecName "kube-api-access-ln8np". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.254549 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ln8np\" (UniqueName: \"kubernetes.io/projected/beccdce3-d1b9-4558-b649-3edfd12eec55-kube-api-access-ln8np\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.254583 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/beccdce3-d1b9-4558-b649-3edfd12eec55-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.294231 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/beccdce3-d1b9-4558-b649-3edfd12eec55-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "beccdce3-d1b9-4558-b649-3edfd12eec55" (UID: "beccdce3-d1b9-4558-b649-3edfd12eec55"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.359456 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/beccdce3-d1b9-4558-b649-3edfd12eec55-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.525682 4708 generic.go:334] "Generic (PLEG): container finished" podID="beccdce3-d1b9-4558-b649-3edfd12eec55" containerID="4ffee6740ce7d5103e302a3ec6cd4aca9b690bb36a2845fe18aeec245edfee7d" exitCode=0 Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.525742 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vgp" event={"ID":"beccdce3-d1b9-4558-b649-3edfd12eec55","Type":"ContainerDied","Data":"4ffee6740ce7d5103e302a3ec6cd4aca9b690bb36a2845fe18aeec245edfee7d"} Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.525781 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k7vgp" event={"ID":"beccdce3-d1b9-4558-b649-3edfd12eec55","Type":"ContainerDied","Data":"6594908a1c9563f41186a88212adb776fd4ad6c901b708cd001a6a32c7534b1f"} Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.525809 4708 scope.go:117] "RemoveContainer" containerID="4ffee6740ce7d5103e302a3ec6cd4aca9b690bb36a2845fe18aeec245edfee7d" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.525843 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k7vgp" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.558274 4708 scope.go:117] "RemoveContainer" containerID="21eefa60635f7561508301cd2aef2f11f0e0edc05350fb9985905c9fe5f00508" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.581276 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k7vgp"] Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.591904 4708 scope.go:117] "RemoveContainer" containerID="3fe6e3b7d405f72065fb49e82d3c387516eaa8b8cd07ae3848746eb14db980b1" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.592169 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k7vgp"] Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.637507 4708 scope.go:117] "RemoveContainer" containerID="4ffee6740ce7d5103e302a3ec6cd4aca9b690bb36a2845fe18aeec245edfee7d" Feb 27 18:46:37 crc kubenswrapper[4708]: E0227 18:46:37.638106 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ffee6740ce7d5103e302a3ec6cd4aca9b690bb36a2845fe18aeec245edfee7d\": container with ID starting with 4ffee6740ce7d5103e302a3ec6cd4aca9b690bb36a2845fe18aeec245edfee7d not found: ID does not exist" containerID="4ffee6740ce7d5103e302a3ec6cd4aca9b690bb36a2845fe18aeec245edfee7d" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.638231 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ffee6740ce7d5103e302a3ec6cd4aca9b690bb36a2845fe18aeec245edfee7d"} err="failed to get container status \"4ffee6740ce7d5103e302a3ec6cd4aca9b690bb36a2845fe18aeec245edfee7d\": rpc error: code = NotFound desc = could not find container \"4ffee6740ce7d5103e302a3ec6cd4aca9b690bb36a2845fe18aeec245edfee7d\": container with ID starting with 4ffee6740ce7d5103e302a3ec6cd4aca9b690bb36a2845fe18aeec245edfee7d not found: ID does not exist" Feb 27 18:46:37 crc 
kubenswrapper[4708]: I0227 18:46:37.638311 4708 scope.go:117] "RemoveContainer" containerID="21eefa60635f7561508301cd2aef2f11f0e0edc05350fb9985905c9fe5f00508" Feb 27 18:46:37 crc kubenswrapper[4708]: E0227 18:46:37.638803 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"21eefa60635f7561508301cd2aef2f11f0e0edc05350fb9985905c9fe5f00508\": container with ID starting with 21eefa60635f7561508301cd2aef2f11f0e0edc05350fb9985905c9fe5f00508 not found: ID does not exist" containerID="21eefa60635f7561508301cd2aef2f11f0e0edc05350fb9985905c9fe5f00508" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.638888 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"21eefa60635f7561508301cd2aef2f11f0e0edc05350fb9985905c9fe5f00508"} err="failed to get container status \"21eefa60635f7561508301cd2aef2f11f0e0edc05350fb9985905c9fe5f00508\": rpc error: code = NotFound desc = could not find container \"21eefa60635f7561508301cd2aef2f11f0e0edc05350fb9985905c9fe5f00508\": container with ID starting with 21eefa60635f7561508301cd2aef2f11f0e0edc05350fb9985905c9fe5f00508 not found: ID does not exist" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.638923 4708 scope.go:117] "RemoveContainer" containerID="3fe6e3b7d405f72065fb49e82d3c387516eaa8b8cd07ae3848746eb14db980b1" Feb 27 18:46:37 crc kubenswrapper[4708]: E0227 18:46:37.639594 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fe6e3b7d405f72065fb49e82d3c387516eaa8b8cd07ae3848746eb14db980b1\": container with ID starting with 3fe6e3b7d405f72065fb49e82d3c387516eaa8b8cd07ae3848746eb14db980b1 not found: ID does not exist" containerID="3fe6e3b7d405f72065fb49e82d3c387516eaa8b8cd07ae3848746eb14db980b1" Feb 27 18:46:37 crc kubenswrapper[4708]: I0227 18:46:37.639697 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fe6e3b7d405f72065fb49e82d3c387516eaa8b8cd07ae3848746eb14db980b1"} err="failed to get container status \"3fe6e3b7d405f72065fb49e82d3c387516eaa8b8cd07ae3848746eb14db980b1\": rpc error: code = NotFound desc = could not find container \"3fe6e3b7d405f72065fb49e82d3c387516eaa8b8cd07ae3848746eb14db980b1\": container with ID starting with 3fe6e3b7d405f72065fb49e82d3c387516eaa8b8cd07ae3848746eb14db980b1 not found: ID does not exist" Feb 27 18:46:38 crc kubenswrapper[4708]: I0227 18:46:38.247536 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="beccdce3-d1b9-4558-b649-3edfd12eec55" path="/var/lib/kubelet/pods/beccdce3-d1b9-4558-b649-3edfd12eec55/volumes" Feb 27 18:46:40 crc kubenswrapper[4708]: I0227 18:46:40.229093 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:46:40 crc kubenswrapper[4708]: E0227 18:46:40.231241 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:46:51 crc kubenswrapper[4708]: I0227 18:46:51.229192 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" 
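[Editor's note: the RemoveContainer/CrashLoopBackOff pairs for machine-config-daemon-kvxg2 repeat every 10–15 seconds from 18:45:24 onward; each pair is one sync attempt rejected because the container's restart back-off has not yet expired. The restart finally goes through at 18:47:43/18:47:44 below. A sketch of the back-off shape: the 5m ceiling matches the logged "back-off 5m0s", while the 10s initial delay and the doubling are assumptions about the kubelet's defaults rather than values taken from this log:]

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial = 10 * time.Second // assumed kubelet default
		ceiling = 5 * time.Minute  // matches "back-off 5m0s" in the entries above
	)
	delay := initial
	for crash := 1; crash <= 8; crash++ {
		fmt.Printf("crash %d: next restart allowed after %v\n", crash, delay)
		delay *= 2
		if delay > ceiling {
			// Saturated: until the delay elapses, every pod sync logs the
			// "back-off 5m0s restarting failed container" error seen above.
			delay = ceiling
		}
	}
}
```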
Feb 27 18:47:03 crc kubenswrapper[4708]: I0227 18:47:03.229174 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34"
Feb 27 18:47:03 crc kubenswrapper[4708]: E0227 18:47:03.230339 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 18:47:17 crc kubenswrapper[4708]: I0227 18:47:17.229214 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34"
Feb 27 18:47:17 crc kubenswrapper[4708]: E0227 18:47:17.230324 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 18:47:29 crc kubenswrapper[4708]: I0227 18:47:29.228799 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34"
Feb 27 18:47:29 crc kubenswrapper[4708]: E0227 18:47:29.229602 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 18:47:43 crc kubenswrapper[4708]: I0227 18:47:43.228726 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34"
Feb 27 18:47:44 crc kubenswrapper[4708]: I0227 18:47:44.357969 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"02670e62352cea09948961e7b31355646c61d95bd708fd2408e9c0930269613b"}
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.156682 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536968-cx75j"]
Feb 27 18:48:00 crc kubenswrapper[4708]: E0227 18:48:00.157774 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beccdce3-d1b9-4558-b649-3edfd12eec55" containerName="extract-content"
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.157813 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="beccdce3-d1b9-4558-b649-3edfd12eec55" containerName="extract-content"
Feb 27 18:48:00 crc kubenswrapper[4708]: E0227 18:48:00.157831 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beccdce3-d1b9-4558-b649-3edfd12eec55" containerName="extract-utilities"
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.157839 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="beccdce3-d1b9-4558-b649-3edfd12eec55" containerName="extract-utilities"
Feb 27 18:48:00 crc kubenswrapper[4708]: E0227 18:48:00.157869 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="beccdce3-d1b9-4558-b649-3edfd12eec55" containerName="registry-server"
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.157876 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="beccdce3-d1b9-4558-b649-3edfd12eec55" containerName="registry-server"
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.158106 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="beccdce3-d1b9-4558-b649-3edfd12eec55" containerName="registry-server"
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.159036 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536968-cx75j"
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.161236 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.161699 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.163647 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5"
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.169826 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536968-cx75j"]
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.323983 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z96db\" (UniqueName: \"kubernetes.io/projected/c6636544-dc9f-4303-9b0d-d11f4ad26518-kube-api-access-z96db\") pod \"auto-csr-approver-29536968-cx75j\" (UID: \"c6636544-dc9f-4303-9b0d-d11f4ad26518\") " pod="openshift-infra/auto-csr-approver-29536968-cx75j"
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.425481 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z96db\" (UniqueName: \"kubernetes.io/projected/c6636544-dc9f-4303-9b0d-d11f4ad26518-kube-api-access-z96db\") pod \"auto-csr-approver-29536968-cx75j\" (UID: \"c6636544-dc9f-4303-9b0d-d11f4ad26518\") " pod="openshift-infra/auto-csr-approver-29536968-cx75j"
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.449275 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z96db\" (UniqueName: \"kubernetes.io/projected/c6636544-dc9f-4303-9b0d-d11f4ad26518-kube-api-access-z96db\") pod \"auto-csr-approver-29536968-cx75j\" (UID: \"c6636544-dc9f-4303-9b0d-d11f4ad26518\") " pod="openshift-infra/auto-csr-approver-29536968-cx75j"
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.488331 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536968-cx75j"
Feb 27 18:48:00 crc kubenswrapper[4708]: I0227 18:48:00.978376 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536968-cx75j"]
Feb 27 18:48:01 crc kubenswrapper[4708]: I0227 18:48:01.538913 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536968-cx75j" event={"ID":"c6636544-dc9f-4303-9b0d-d11f4ad26518","Type":"ContainerStarted","Data":"1ee21b503dd64d30b05fb359f5c5690f1a930c441299418a30656a2a8b392c2c"}
Feb 27 18:48:03 crc kubenswrapper[4708]: I0227 18:48:03.569830 4708 generic.go:334] "Generic (PLEG): container finished" podID="c6636544-dc9f-4303-9b0d-d11f4ad26518" containerID="02653b3f26948da5318eff30950ac38b2a9b8db79d9c226182c8ea5e46ce0eca" exitCode=0
Feb 27 18:48:03 crc kubenswrapper[4708]: I0227 18:48:03.569925 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536968-cx75j" event={"ID":"c6636544-dc9f-4303-9b0d-d11f4ad26518","Type":"ContainerDied","Data":"02653b3f26948da5318eff30950ac38b2a9b8db79d9c226182c8ea5e46ce0eca"}
Feb 27 18:48:05 crc kubenswrapper[4708]: I0227 18:48:05.002242 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536968-cx75j"
Feb 27 18:48:05 crc kubenswrapper[4708]: I0227 18:48:05.127701 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z96db\" (UniqueName: \"kubernetes.io/projected/c6636544-dc9f-4303-9b0d-d11f4ad26518-kube-api-access-z96db\") pod \"c6636544-dc9f-4303-9b0d-d11f4ad26518\" (UID: \"c6636544-dc9f-4303-9b0d-d11f4ad26518\") "
Feb 27 18:48:05 crc kubenswrapper[4708]: I0227 18:48:05.135048 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6636544-dc9f-4303-9b0d-d11f4ad26518-kube-api-access-z96db" (OuterVolumeSpecName: "kube-api-access-z96db") pod "c6636544-dc9f-4303-9b0d-d11f4ad26518" (UID: "c6636544-dc9f-4303-9b0d-d11f4ad26518"). InnerVolumeSpecName "kube-api-access-z96db". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 18:48:05 crc kubenswrapper[4708]: I0227 18:48:05.229831 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z96db\" (UniqueName: \"kubernetes.io/projected/c6636544-dc9f-4303-9b0d-d11f4ad26518-kube-api-access-z96db\") on node \"crc\" DevicePath \"\""
Feb 27 18:48:05 crc kubenswrapper[4708]: I0227 18:48:05.592049 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536968-cx75j" event={"ID":"c6636544-dc9f-4303-9b0d-d11f4ad26518","Type":"ContainerDied","Data":"1ee21b503dd64d30b05fb359f5c5690f1a930c441299418a30656a2a8b392c2c"}
Feb 27 18:48:05 crc kubenswrapper[4708]: I0227 18:48:05.592100 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ee21b503dd64d30b05fb359f5c5690f1a930c441299418a30656a2a8b392c2c"
Feb 27 18:48:05 crc kubenswrapper[4708]: I0227 18:48:05.592165 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536968-cx75j"
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536968-cx75j" Feb 27 18:48:06 crc kubenswrapper[4708]: I0227 18:48:06.129681 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536962-rbrxs"] Feb 27 18:48:06 crc kubenswrapper[4708]: I0227 18:48:06.142649 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536962-rbrxs"] Feb 27 18:48:06 crc kubenswrapper[4708]: I0227 18:48:06.241717 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79bf828f-3371-4268-9a09-f647ee2f7716" path="/var/lib/kubelet/pods/79bf828f-3371-4268-9a09-f647ee2f7716/volumes" Feb 27 18:48:07 crc kubenswrapper[4708]: I0227 18:48:07.440931 4708 scope.go:117] "RemoveContainer" containerID="986fda1fff6e551f9a1d5fcf85943c2e72ae31387282b060a24837d7a003e6f3" Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.568631 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jqk9z"] Feb 27 18:49:38 crc kubenswrapper[4708]: E0227 18:49:38.570448 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6636544-dc9f-4303-9b0d-d11f4ad26518" containerName="oc" Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.570464 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6636544-dc9f-4303-9b0d-d11f4ad26518" containerName="oc" Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.570839 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6636544-dc9f-4303-9b0d-d11f4ad26518" containerName="oc" Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.573124 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.597708 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jqk9z"] Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.609703 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nqx4\" (UniqueName: \"kubernetes.io/projected/483ecb82-700c-4ece-9885-71a38c0b5c1a-kube-api-access-9nqx4\") pod \"community-operators-jqk9z\" (UID: \"483ecb82-700c-4ece-9885-71a38c0b5c1a\") " pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.609744 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483ecb82-700c-4ece-9885-71a38c0b5c1a-catalog-content\") pod \"community-operators-jqk9z\" (UID: \"483ecb82-700c-4ece-9885-71a38c0b5c1a\") " pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.610140 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483ecb82-700c-4ece-9885-71a38c0b5c1a-utilities\") pod \"community-operators-jqk9z\" (UID: \"483ecb82-700c-4ece-9885-71a38c0b5c1a\") " pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.712573 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483ecb82-700c-4ece-9885-71a38c0b5c1a-utilities\") pod \"community-operators-jqk9z\" (UID: \"483ecb82-700c-4ece-9885-71a38c0b5c1a\") " 
pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.712704 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nqx4\" (UniqueName: \"kubernetes.io/projected/483ecb82-700c-4ece-9885-71a38c0b5c1a-kube-api-access-9nqx4\") pod \"community-operators-jqk9z\" (UID: \"483ecb82-700c-4ece-9885-71a38c0b5c1a\") " pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.712733 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483ecb82-700c-4ece-9885-71a38c0b5c1a-catalog-content\") pod \"community-operators-jqk9z\" (UID: \"483ecb82-700c-4ece-9885-71a38c0b5c1a\") " pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.713450 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483ecb82-700c-4ece-9885-71a38c0b5c1a-utilities\") pod \"community-operators-jqk9z\" (UID: \"483ecb82-700c-4ece-9885-71a38c0b5c1a\") " pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.713492 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483ecb82-700c-4ece-9885-71a38c0b5c1a-catalog-content\") pod \"community-operators-jqk9z\" (UID: \"483ecb82-700c-4ece-9885-71a38c0b5c1a\") " pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.732705 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nqx4\" (UniqueName: \"kubernetes.io/projected/483ecb82-700c-4ece-9885-71a38c0b5c1a-kube-api-access-9nqx4\") pod \"community-operators-jqk9z\" (UID: \"483ecb82-700c-4ece-9885-71a38c0b5c1a\") " pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:38 crc kubenswrapper[4708]: I0227 18:49:38.925215 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:39 crc kubenswrapper[4708]: I0227 18:49:39.449218 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jqk9z"] Feb 27 18:49:39 crc kubenswrapper[4708]: I0227 18:49:39.799515 4708 generic.go:334] "Generic (PLEG): container finished" podID="483ecb82-700c-4ece-9885-71a38c0b5c1a" containerID="9fcaed41de08db295b1ac3bc9c988b80ea30a1c77e0811c06a8bc9bc2e552e0e" exitCode=0 Feb 27 18:49:39 crc kubenswrapper[4708]: I0227 18:49:39.799749 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jqk9z" event={"ID":"483ecb82-700c-4ece-9885-71a38c0b5c1a","Type":"ContainerDied","Data":"9fcaed41de08db295b1ac3bc9c988b80ea30a1c77e0811c06a8bc9bc2e552e0e"} Feb 27 18:49:39 crc kubenswrapper[4708]: I0227 18:49:39.799866 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jqk9z" event={"ID":"483ecb82-700c-4ece-9885-71a38c0b5c1a","Type":"ContainerStarted","Data":"88f9a36e798789bdd07738e60b3498cf28b7852735d76d56f43295c47f8e76cc"} Feb 27 18:49:41 crc kubenswrapper[4708]: I0227 18:49:41.842443 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jqk9z" event={"ID":"483ecb82-700c-4ece-9885-71a38c0b5c1a","Type":"ContainerStarted","Data":"5c0512f1fdc97e45ac6ff5912fc98b709bd2c19e174f67fb46e87751b42c9e74"} Feb 27 18:49:42 crc kubenswrapper[4708]: I0227 18:49:42.852512 4708 generic.go:334] "Generic (PLEG): container finished" podID="483ecb82-700c-4ece-9885-71a38c0b5c1a" containerID="5c0512f1fdc97e45ac6ff5912fc98b709bd2c19e174f67fb46e87751b42c9e74" exitCode=0 Feb 27 18:49:42 crc kubenswrapper[4708]: I0227 18:49:42.852621 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jqk9z" event={"ID":"483ecb82-700c-4ece-9885-71a38c0b5c1a","Type":"ContainerDied","Data":"5c0512f1fdc97e45ac6ff5912fc98b709bd2c19e174f67fb46e87751b42c9e74"} Feb 27 18:49:44 crc kubenswrapper[4708]: I0227 18:49:44.149133 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jqk9z" event={"ID":"483ecb82-700c-4ece-9885-71a38c0b5c1a","Type":"ContainerStarted","Data":"3585f35363e262ca648309bf4861a5dd8db7d31ee8ba91da93a24352f451a0ca"} Feb 27 18:49:44 crc kubenswrapper[4708]: I0227 18:49:44.204024 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jqk9z" podStartSLOduration=2.711992185 podStartE2EDuration="6.204004015s" podCreationTimestamp="2026-02-27 18:49:38 +0000 UTC" firstStartedPulling="2026-02-27 18:49:39.802570063 +0000 UTC m=+6978.318367730" lastFinishedPulling="2026-02-27 18:49:43.294581973 +0000 UTC m=+6981.810379560" observedRunningTime="2026-02-27 18:49:44.199001603 +0000 UTC m=+6982.714799190" watchObservedRunningTime="2026-02-27 18:49:44.204004015 +0000 UTC m=+6982.719801602" Feb 27 18:49:48 crc kubenswrapper[4708]: I0227 18:49:48.925623 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:48 crc kubenswrapper[4708]: I0227 18:49:48.926405 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:49 crc kubenswrapper[4708]: I0227 18:49:49.020030 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:49 crc kubenswrapper[4708]: I0227 18:49:49.286639 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:49 crc kubenswrapper[4708]: I0227 18:49:49.361396 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jqk9z"] Feb 27 18:49:51 crc kubenswrapper[4708]: I0227 18:49:51.225463 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jqk9z" podUID="483ecb82-700c-4ece-9885-71a38c0b5c1a" containerName="registry-server" containerID="cri-o://3585f35363e262ca648309bf4861a5dd8db7d31ee8ba91da93a24352f451a0ca" gracePeriod=2 Feb 27 18:49:51 crc kubenswrapper[4708]: I0227 18:49:51.857182 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.020714 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483ecb82-700c-4ece-9885-71a38c0b5c1a-catalog-content\") pod \"483ecb82-700c-4ece-9885-71a38c0b5c1a\" (UID: \"483ecb82-700c-4ece-9885-71a38c0b5c1a\") " Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.021760 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nqx4\" (UniqueName: \"kubernetes.io/projected/483ecb82-700c-4ece-9885-71a38c0b5c1a-kube-api-access-9nqx4\") pod \"483ecb82-700c-4ece-9885-71a38c0b5c1a\" (UID: \"483ecb82-700c-4ece-9885-71a38c0b5c1a\") " Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.022116 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483ecb82-700c-4ece-9885-71a38c0b5c1a-utilities\") pod \"483ecb82-700c-4ece-9885-71a38c0b5c1a\" (UID: \"483ecb82-700c-4ece-9885-71a38c0b5c1a\") " Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.022906 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/483ecb82-700c-4ece-9885-71a38c0b5c1a-utilities" (OuterVolumeSpecName: "utilities") pod "483ecb82-700c-4ece-9885-71a38c0b5c1a" (UID: "483ecb82-700c-4ece-9885-71a38c0b5c1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.023187 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/483ecb82-700c-4ece-9885-71a38c0b5c1a-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.030482 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483ecb82-700c-4ece-9885-71a38c0b5c1a-kube-api-access-9nqx4" (OuterVolumeSpecName: "kube-api-access-9nqx4") pod "483ecb82-700c-4ece-9885-71a38c0b5c1a" (UID: "483ecb82-700c-4ece-9885-71a38c0b5c1a"). InnerVolumeSpecName "kube-api-access-9nqx4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.090881 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/483ecb82-700c-4ece-9885-71a38c0b5c1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "483ecb82-700c-4ece-9885-71a38c0b5c1a" (UID: "483ecb82-700c-4ece-9885-71a38c0b5c1a"). 
InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.125308 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nqx4\" (UniqueName: \"kubernetes.io/projected/483ecb82-700c-4ece-9885-71a38c0b5c1a-kube-api-access-9nqx4\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.125517 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/483ecb82-700c-4ece-9885-71a38c0b5c1a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.242632 4708 generic.go:334] "Generic (PLEG): container finished" podID="483ecb82-700c-4ece-9885-71a38c0b5c1a" containerID="3585f35363e262ca648309bf4861a5dd8db7d31ee8ba91da93a24352f451a0ca" exitCode=0 Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.242838 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jqk9z" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.250911 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jqk9z" event={"ID":"483ecb82-700c-4ece-9885-71a38c0b5c1a","Type":"ContainerDied","Data":"3585f35363e262ca648309bf4861a5dd8db7d31ee8ba91da93a24352f451a0ca"} Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.251001 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jqk9z" event={"ID":"483ecb82-700c-4ece-9885-71a38c0b5c1a","Type":"ContainerDied","Data":"88f9a36e798789bdd07738e60b3498cf28b7852735d76d56f43295c47f8e76cc"} Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.251065 4708 scope.go:117] "RemoveContainer" containerID="3585f35363e262ca648309bf4861a5dd8db7d31ee8ba91da93a24352f451a0ca" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.304939 4708 scope.go:117] "RemoveContainer" containerID="5c0512f1fdc97e45ac6ff5912fc98b709bd2c19e174f67fb46e87751b42c9e74" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.321168 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jqk9z"] Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.342236 4708 scope.go:117] "RemoveContainer" containerID="9fcaed41de08db295b1ac3bc9c988b80ea30a1c77e0811c06a8bc9bc2e552e0e" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.345099 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jqk9z"] Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.396651 4708 scope.go:117] "RemoveContainer" containerID="3585f35363e262ca648309bf4861a5dd8db7d31ee8ba91da93a24352f451a0ca" Feb 27 18:49:52 crc kubenswrapper[4708]: E0227 18:49:52.397297 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3585f35363e262ca648309bf4861a5dd8db7d31ee8ba91da93a24352f451a0ca\": container with ID starting with 3585f35363e262ca648309bf4861a5dd8db7d31ee8ba91da93a24352f451a0ca not found: ID does not exist" containerID="3585f35363e262ca648309bf4861a5dd8db7d31ee8ba91da93a24352f451a0ca" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.397434 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3585f35363e262ca648309bf4861a5dd8db7d31ee8ba91da93a24352f451a0ca"} err="failed to get container status 
\"3585f35363e262ca648309bf4861a5dd8db7d31ee8ba91da93a24352f451a0ca\": rpc error: code = NotFound desc = could not find container \"3585f35363e262ca648309bf4861a5dd8db7d31ee8ba91da93a24352f451a0ca\": container with ID starting with 3585f35363e262ca648309bf4861a5dd8db7d31ee8ba91da93a24352f451a0ca not found: ID does not exist" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.397566 4708 scope.go:117] "RemoveContainer" containerID="5c0512f1fdc97e45ac6ff5912fc98b709bd2c19e174f67fb46e87751b42c9e74" Feb 27 18:49:52 crc kubenswrapper[4708]: E0227 18:49:52.397980 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c0512f1fdc97e45ac6ff5912fc98b709bd2c19e174f67fb46e87751b42c9e74\": container with ID starting with 5c0512f1fdc97e45ac6ff5912fc98b709bd2c19e174f67fb46e87751b42c9e74 not found: ID does not exist" containerID="5c0512f1fdc97e45ac6ff5912fc98b709bd2c19e174f67fb46e87751b42c9e74" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.398027 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c0512f1fdc97e45ac6ff5912fc98b709bd2c19e174f67fb46e87751b42c9e74"} err="failed to get container status \"5c0512f1fdc97e45ac6ff5912fc98b709bd2c19e174f67fb46e87751b42c9e74\": rpc error: code = NotFound desc = could not find container \"5c0512f1fdc97e45ac6ff5912fc98b709bd2c19e174f67fb46e87751b42c9e74\": container with ID starting with 5c0512f1fdc97e45ac6ff5912fc98b709bd2c19e174f67fb46e87751b42c9e74 not found: ID does not exist" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.398051 4708 scope.go:117] "RemoveContainer" containerID="9fcaed41de08db295b1ac3bc9c988b80ea30a1c77e0811c06a8bc9bc2e552e0e" Feb 27 18:49:52 crc kubenswrapper[4708]: E0227 18:49:52.398268 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fcaed41de08db295b1ac3bc9c988b80ea30a1c77e0811c06a8bc9bc2e552e0e\": container with ID starting with 9fcaed41de08db295b1ac3bc9c988b80ea30a1c77e0811c06a8bc9bc2e552e0e not found: ID does not exist" containerID="9fcaed41de08db295b1ac3bc9c988b80ea30a1c77e0811c06a8bc9bc2e552e0e" Feb 27 18:49:52 crc kubenswrapper[4708]: I0227 18:49:52.398299 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fcaed41de08db295b1ac3bc9c988b80ea30a1c77e0811c06a8bc9bc2e552e0e"} err="failed to get container status \"9fcaed41de08db295b1ac3bc9c988b80ea30a1c77e0811c06a8bc9bc2e552e0e\": rpc error: code = NotFound desc = could not find container \"9fcaed41de08db295b1ac3bc9c988b80ea30a1c77e0811c06a8bc9bc2e552e0e\": container with ID starting with 9fcaed41de08db295b1ac3bc9c988b80ea30a1c77e0811c06a8bc9bc2e552e0e not found: ID does not exist" Feb 27 18:49:54 crc kubenswrapper[4708]: I0227 18:49:54.245902 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="483ecb82-700c-4ece-9885-71a38c0b5c1a" path="/var/lib/kubelet/pods/483ecb82-700c-4ece-9885-71a38c0b5c1a/volumes" Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.143073 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536970-v68lf"] Feb 27 18:50:00 crc kubenswrapper[4708]: E0227 18:50:00.144106 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483ecb82-700c-4ece-9885-71a38c0b5c1a" containerName="extract-utilities" Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.144124 4708 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="483ecb82-700c-4ece-9885-71a38c0b5c1a" containerName="extract-utilities" Feb 27 18:50:00 crc kubenswrapper[4708]: E0227 18:50:00.144160 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483ecb82-700c-4ece-9885-71a38c0b5c1a" containerName="registry-server" Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.144170 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="483ecb82-700c-4ece-9885-71a38c0b5c1a" containerName="registry-server" Feb 27 18:50:00 crc kubenswrapper[4708]: E0227 18:50:00.144186 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="483ecb82-700c-4ece-9885-71a38c0b5c1a" containerName="extract-content" Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.144195 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="483ecb82-700c-4ece-9885-71a38c0b5c1a" containerName="extract-content" Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.144445 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="483ecb82-700c-4ece-9885-71a38c0b5c1a" containerName="registry-server" Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.145444 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536970-v68lf" Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.147438 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.147828 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.147837 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.158724 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536970-v68lf"] Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.299668 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlk24\" (UniqueName: \"kubernetes.io/projected/5909d02e-f00e-4bfc-be51-e58327efabbe-kube-api-access-jlk24\") pod \"auto-csr-approver-29536970-v68lf\" (UID: \"5909d02e-f00e-4bfc-be51-e58327efabbe\") " pod="openshift-infra/auto-csr-approver-29536970-v68lf" Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.402101 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlk24\" (UniqueName: \"kubernetes.io/projected/5909d02e-f00e-4bfc-be51-e58327efabbe-kube-api-access-jlk24\") pod \"auto-csr-approver-29536970-v68lf\" (UID: \"5909d02e-f00e-4bfc-be51-e58327efabbe\") " pod="openshift-infra/auto-csr-approver-29536970-v68lf" Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.418676 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlk24\" (UniqueName: \"kubernetes.io/projected/5909d02e-f00e-4bfc-be51-e58327efabbe-kube-api-access-jlk24\") pod \"auto-csr-approver-29536970-v68lf\" (UID: \"5909d02e-f00e-4bfc-be51-e58327efabbe\") " pod="openshift-infra/auto-csr-approver-29536970-v68lf" Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.471443 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536970-v68lf" Feb 27 18:50:00 crc kubenswrapper[4708]: I0227 18:50:00.940651 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536970-v68lf"] Feb 27 18:50:01 crc kubenswrapper[4708]: I0227 18:50:01.339122 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536970-v68lf" event={"ID":"5909d02e-f00e-4bfc-be51-e58327efabbe","Type":"ContainerStarted","Data":"42a157e6cedb35632423a0b996931948110d64d4c8319e51d890771b0ad1d42b"} Feb 27 18:50:03 crc kubenswrapper[4708]: I0227 18:50:03.364943 4708 generic.go:334] "Generic (PLEG): container finished" podID="5909d02e-f00e-4bfc-be51-e58327efabbe" containerID="8e3ffb60034da3100fbd66005b1861a0b57380f96ad4df825fcf4afe793d4dd6" exitCode=0 Feb 27 18:50:03 crc kubenswrapper[4708]: I0227 18:50:03.365049 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536970-v68lf" event={"ID":"5909d02e-f00e-4bfc-be51-e58327efabbe","Type":"ContainerDied","Data":"8e3ffb60034da3100fbd66005b1861a0b57380f96ad4df825fcf4afe793d4dd6"} Feb 27 18:50:04 crc kubenswrapper[4708]: I0227 18:50:04.781223 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536970-v68lf" Feb 27 18:50:04 crc kubenswrapper[4708]: I0227 18:50:04.893222 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlk24\" (UniqueName: \"kubernetes.io/projected/5909d02e-f00e-4bfc-be51-e58327efabbe-kube-api-access-jlk24\") pod \"5909d02e-f00e-4bfc-be51-e58327efabbe\" (UID: \"5909d02e-f00e-4bfc-be51-e58327efabbe\") " Feb 27 18:50:04 crc kubenswrapper[4708]: I0227 18:50:04.898821 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5909d02e-f00e-4bfc-be51-e58327efabbe-kube-api-access-jlk24" (OuterVolumeSpecName: "kube-api-access-jlk24") pod "5909d02e-f00e-4bfc-be51-e58327efabbe" (UID: "5909d02e-f00e-4bfc-be51-e58327efabbe"). InnerVolumeSpecName "kube-api-access-jlk24". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:50:04 crc kubenswrapper[4708]: I0227 18:50:04.995336 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlk24\" (UniqueName: \"kubernetes.io/projected/5909d02e-f00e-4bfc-be51-e58327efabbe-kube-api-access-jlk24\") on node \"crc\" DevicePath \"\"" Feb 27 18:50:05 crc kubenswrapper[4708]: I0227 18:50:05.387352 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536970-v68lf" event={"ID":"5909d02e-f00e-4bfc-be51-e58327efabbe","Type":"ContainerDied","Data":"42a157e6cedb35632423a0b996931948110d64d4c8319e51d890771b0ad1d42b"} Feb 27 18:50:05 crc kubenswrapper[4708]: I0227 18:50:05.387777 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42a157e6cedb35632423a0b996931948110d64d4c8319e51d890771b0ad1d42b" Feb 27 18:50:05 crc kubenswrapper[4708]: I0227 18:50:05.387420 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536970-v68lf" Feb 27 18:50:05 crc kubenswrapper[4708]: I0227 18:50:05.631135 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:50:05 crc kubenswrapper[4708]: I0227 18:50:05.631200 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:50:05 crc kubenswrapper[4708]: I0227 18:50:05.873282 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536964-xlvjp"] Feb 27 18:50:05 crc kubenswrapper[4708]: I0227 18:50:05.883280 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536964-xlvjp"] Feb 27 18:50:06 crc kubenswrapper[4708]: I0227 18:50:06.248143 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6e66e4c2-c6e4-422b-8959-abdfa5e1386f" path="/var/lib/kubelet/pods/6e66e4c2-c6e4-422b-8959-abdfa5e1386f/volumes" Feb 27 18:50:07 crc kubenswrapper[4708]: I0227 18:50:07.548343 4708 scope.go:117] "RemoveContainer" containerID="039944149c3528909a0b8fbfa5e9741781130d42651d24047cbf94591e356ef6" Feb 27 18:50:35 crc kubenswrapper[4708]: I0227 18:50:35.631520 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:50:35 crc kubenswrapper[4708]: I0227 18:50:35.632170 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:50:44 crc kubenswrapper[4708]: I0227 18:50:44.786037 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/neutron-547f9bd6cc-98rqm" podUID="b9aa13d2-83ae-4a00-821d-97fc5592ec7e" containerName="neutron-api" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.554935 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pt6h7"] Feb 27 18:51:02 crc kubenswrapper[4708]: E0227 18:51:02.557391 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5909d02e-f00e-4bfc-be51-e58327efabbe" containerName="oc" Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.557517 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5909d02e-f00e-4bfc-be51-e58327efabbe" containerName="oc" Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.557976 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="5909d02e-f00e-4bfc-be51-e58327efabbe" containerName="oc" Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.562980 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.568052 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pt6h7"] Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.669796 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792bc79d-f48f-4780-8943-84625d7aaddf-catalog-content\") pod \"certified-operators-pt6h7\" (UID: \"792bc79d-f48f-4780-8943-84625d7aaddf\") " pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.670170 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zvqr\" (UniqueName: \"kubernetes.io/projected/792bc79d-f48f-4780-8943-84625d7aaddf-kube-api-access-8zvqr\") pod \"certified-operators-pt6h7\" (UID: \"792bc79d-f48f-4780-8943-84625d7aaddf\") " pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.670415 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792bc79d-f48f-4780-8943-84625d7aaddf-utilities\") pod \"certified-operators-pt6h7\" (UID: \"792bc79d-f48f-4780-8943-84625d7aaddf\") " pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.771963 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792bc79d-f48f-4780-8943-84625d7aaddf-utilities\") pod \"certified-operators-pt6h7\" (UID: \"792bc79d-f48f-4780-8943-84625d7aaddf\") " pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.772131 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792bc79d-f48f-4780-8943-84625d7aaddf-catalog-content\") pod \"certified-operators-pt6h7\" (UID: \"792bc79d-f48f-4780-8943-84625d7aaddf\") " pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.772166 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zvqr\" (UniqueName: \"kubernetes.io/projected/792bc79d-f48f-4780-8943-84625d7aaddf-kube-api-access-8zvqr\") pod \"certified-operators-pt6h7\" (UID: \"792bc79d-f48f-4780-8943-84625d7aaddf\") " pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.772598 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792bc79d-f48f-4780-8943-84625d7aaddf-utilities\") pod \"certified-operators-pt6h7\" (UID: \"792bc79d-f48f-4780-8943-84625d7aaddf\") " pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.772655 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792bc79d-f48f-4780-8943-84625d7aaddf-catalog-content\") pod \"certified-operators-pt6h7\" (UID: \"792bc79d-f48f-4780-8943-84625d7aaddf\") " pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.799781 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-8zvqr\" (UniqueName: \"kubernetes.io/projected/792bc79d-f48f-4780-8943-84625d7aaddf-kube-api-access-8zvqr\") pod \"certified-operators-pt6h7\" (UID: \"792bc79d-f48f-4780-8943-84625d7aaddf\") " pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:02 crc kubenswrapper[4708]: I0227 18:51:02.902921 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:03 crc kubenswrapper[4708]: I0227 18:51:03.395955 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pt6h7"] Feb 27 18:51:04 crc kubenswrapper[4708]: I0227 18:51:04.190591 4708 generic.go:334] "Generic (PLEG): container finished" podID="792bc79d-f48f-4780-8943-84625d7aaddf" containerID="9bc5ea8b2cbca142cef6ab45cfe34b36ee246154d301ea0420001f50516ed30f" exitCode=0 Feb 27 18:51:04 crc kubenswrapper[4708]: I0227 18:51:04.190626 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt6h7" event={"ID":"792bc79d-f48f-4780-8943-84625d7aaddf","Type":"ContainerDied","Data":"9bc5ea8b2cbca142cef6ab45cfe34b36ee246154d301ea0420001f50516ed30f"} Feb 27 18:51:04 crc kubenswrapper[4708]: I0227 18:51:04.190650 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt6h7" event={"ID":"792bc79d-f48f-4780-8943-84625d7aaddf","Type":"ContainerStarted","Data":"7627a1be900dcbcc47657725bc48c66383f722066d739a18021d176b15c5a8fc"} Feb 27 18:51:04 crc kubenswrapper[4708]: I0227 18:51:04.192625 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 18:51:05 crc kubenswrapper[4708]: I0227 18:51:05.202320 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt6h7" event={"ID":"792bc79d-f48f-4780-8943-84625d7aaddf","Type":"ContainerStarted","Data":"0a2739fa116d796f2d1156683bcdcb7a27118b0aa901cdc56bbed420f073c7cf"} Feb 27 18:51:05 crc kubenswrapper[4708]: I0227 18:51:05.631877 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:51:05 crc kubenswrapper[4708]: I0227 18:51:05.632302 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:51:05 crc kubenswrapper[4708]: I0227 18:51:05.632369 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 18:51:05 crc kubenswrapper[4708]: I0227 18:51:05.633510 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"02670e62352cea09948961e7b31355646c61d95bd708fd2408e9c0930269613b"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:51:05 crc kubenswrapper[4708]: I0227 18:51:05.633645 4708 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://02670e62352cea09948961e7b31355646c61d95bd708fd2408e9c0930269613b" gracePeriod=600 Feb 27 18:51:06 crc kubenswrapper[4708]: I0227 18:51:06.215832 4708 generic.go:334] "Generic (PLEG): container finished" podID="792bc79d-f48f-4780-8943-84625d7aaddf" containerID="0a2739fa116d796f2d1156683bcdcb7a27118b0aa901cdc56bbed420f073c7cf" exitCode=0 Feb 27 18:51:06 crc kubenswrapper[4708]: I0227 18:51:06.216204 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt6h7" event={"ID":"792bc79d-f48f-4780-8943-84625d7aaddf","Type":"ContainerDied","Data":"0a2739fa116d796f2d1156683bcdcb7a27118b0aa901cdc56bbed420f073c7cf"} Feb 27 18:51:06 crc kubenswrapper[4708]: I0227 18:51:06.221235 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="02670e62352cea09948961e7b31355646c61d95bd708fd2408e9c0930269613b" exitCode=0 Feb 27 18:51:06 crc kubenswrapper[4708]: I0227 18:51:06.221296 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"02670e62352cea09948961e7b31355646c61d95bd708fd2408e9c0930269613b"} Feb 27 18:51:06 crc kubenswrapper[4708]: I0227 18:51:06.221329 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3"} Feb 27 18:51:06 crc kubenswrapper[4708]: I0227 18:51:06.221352 4708 scope.go:117] "RemoveContainer" containerID="b0ebf44e292820a69983397981329f7daa9bd106ff566ca4be36b79ef7a8ff34" Feb 27 18:51:07 crc kubenswrapper[4708]: I0227 18:51:07.235960 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt6h7" event={"ID":"792bc79d-f48f-4780-8943-84625d7aaddf","Type":"ContainerStarted","Data":"c40eeaa32a5e0c52b8ea586d4bae5a31eeef93d5af777836655e26320404d640"} Feb 27 18:51:07 crc kubenswrapper[4708]: I0227 18:51:07.263736 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pt6h7" podStartSLOduration=2.816961316 podStartE2EDuration="5.263717855s" podCreationTimestamp="2026-02-27 18:51:02 +0000 UTC" firstStartedPulling="2026-02-27 18:51:04.192428765 +0000 UTC m=+7062.708226352" lastFinishedPulling="2026-02-27 18:51:06.639185304 +0000 UTC m=+7065.154982891" observedRunningTime="2026-02-27 18:51:07.257086827 +0000 UTC m=+7065.772884434" watchObservedRunningTime="2026-02-27 18:51:07.263717855 +0000 UTC m=+7065.779515442" Feb 27 18:51:12 crc kubenswrapper[4708]: I0227 18:51:12.904003 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:12 crc kubenswrapper[4708]: I0227 18:51:12.904382 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:12 crc kubenswrapper[4708]: I0227 18:51:12.995159 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:13 crc kubenswrapper[4708]: I0227 
18:51:13.388593 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:14 crc kubenswrapper[4708]: I0227 18:51:14.289533 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pt6h7"] Feb 27 18:51:15 crc kubenswrapper[4708]: I0227 18:51:15.326139 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pt6h7" podUID="792bc79d-f48f-4780-8943-84625d7aaddf" containerName="registry-server" containerID="cri-o://c40eeaa32a5e0c52b8ea586d4bae5a31eeef93d5af777836655e26320404d640" gracePeriod=2 Feb 27 18:51:15 crc kubenswrapper[4708]: I0227 18:51:15.919547 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.085969 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zvqr\" (UniqueName: \"kubernetes.io/projected/792bc79d-f48f-4780-8943-84625d7aaddf-kube-api-access-8zvqr\") pod \"792bc79d-f48f-4780-8943-84625d7aaddf\" (UID: \"792bc79d-f48f-4780-8943-84625d7aaddf\") " Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.086034 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792bc79d-f48f-4780-8943-84625d7aaddf-catalog-content\") pod \"792bc79d-f48f-4780-8943-84625d7aaddf\" (UID: \"792bc79d-f48f-4780-8943-84625d7aaddf\") " Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.086115 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792bc79d-f48f-4780-8943-84625d7aaddf-utilities\") pod \"792bc79d-f48f-4780-8943-84625d7aaddf\" (UID: \"792bc79d-f48f-4780-8943-84625d7aaddf\") " Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.087091 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792bc79d-f48f-4780-8943-84625d7aaddf-utilities" (OuterVolumeSpecName: "utilities") pod "792bc79d-f48f-4780-8943-84625d7aaddf" (UID: "792bc79d-f48f-4780-8943-84625d7aaddf"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.103121 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/792bc79d-f48f-4780-8943-84625d7aaddf-kube-api-access-8zvqr" (OuterVolumeSpecName: "kube-api-access-8zvqr") pod "792bc79d-f48f-4780-8943-84625d7aaddf" (UID: "792bc79d-f48f-4780-8943-84625d7aaddf"). InnerVolumeSpecName "kube-api-access-8zvqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.140000 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/792bc79d-f48f-4780-8943-84625d7aaddf-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "792bc79d-f48f-4780-8943-84625d7aaddf" (UID: "792bc79d-f48f-4780-8943-84625d7aaddf"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.188345 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zvqr\" (UniqueName: \"kubernetes.io/projected/792bc79d-f48f-4780-8943-84625d7aaddf-kube-api-access-8zvqr\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.188578 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/792bc79d-f48f-4780-8943-84625d7aaddf-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.188646 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/792bc79d-f48f-4780-8943-84625d7aaddf-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.338962 4708 generic.go:334] "Generic (PLEG): container finished" podID="792bc79d-f48f-4780-8943-84625d7aaddf" containerID="c40eeaa32a5e0c52b8ea586d4bae5a31eeef93d5af777836655e26320404d640" exitCode=0 Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.339025 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt6h7" event={"ID":"792bc79d-f48f-4780-8943-84625d7aaddf","Type":"ContainerDied","Data":"c40eeaa32a5e0c52b8ea586d4bae5a31eeef93d5af777836655e26320404d640"} Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.339071 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pt6h7" event={"ID":"792bc79d-f48f-4780-8943-84625d7aaddf","Type":"ContainerDied","Data":"7627a1be900dcbcc47657725bc48c66383f722066d739a18021d176b15c5a8fc"} Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.339086 4708 scope.go:117] "RemoveContainer" containerID="c40eeaa32a5e0c52b8ea586d4bae5a31eeef93d5af777836655e26320404d640" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.340119 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pt6h7" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.368784 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pt6h7"] Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.371169 4708 scope.go:117] "RemoveContainer" containerID="0a2739fa116d796f2d1156683bcdcb7a27118b0aa901cdc56bbed420f073c7cf" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.384556 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pt6h7"] Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.396437 4708 scope.go:117] "RemoveContainer" containerID="9bc5ea8b2cbca142cef6ab45cfe34b36ee246154d301ea0420001f50516ed30f" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.468922 4708 scope.go:117] "RemoveContainer" containerID="c40eeaa32a5e0c52b8ea586d4bae5a31eeef93d5af777836655e26320404d640" Feb 27 18:51:16 crc kubenswrapper[4708]: E0227 18:51:16.469382 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c40eeaa32a5e0c52b8ea586d4bae5a31eeef93d5af777836655e26320404d640\": container with ID starting with c40eeaa32a5e0c52b8ea586d4bae5a31eeef93d5af777836655e26320404d640 not found: ID does not exist" containerID="c40eeaa32a5e0c52b8ea586d4bae5a31eeef93d5af777836655e26320404d640" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.469422 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c40eeaa32a5e0c52b8ea586d4bae5a31eeef93d5af777836655e26320404d640"} err="failed to get container status \"c40eeaa32a5e0c52b8ea586d4bae5a31eeef93d5af777836655e26320404d640\": rpc error: code = NotFound desc = could not find container \"c40eeaa32a5e0c52b8ea586d4bae5a31eeef93d5af777836655e26320404d640\": container with ID starting with c40eeaa32a5e0c52b8ea586d4bae5a31eeef93d5af777836655e26320404d640 not found: ID does not exist" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.469451 4708 scope.go:117] "RemoveContainer" containerID="0a2739fa116d796f2d1156683bcdcb7a27118b0aa901cdc56bbed420f073c7cf" Feb 27 18:51:16 crc kubenswrapper[4708]: E0227 18:51:16.469930 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0a2739fa116d796f2d1156683bcdcb7a27118b0aa901cdc56bbed420f073c7cf\": container with ID starting with 0a2739fa116d796f2d1156683bcdcb7a27118b0aa901cdc56bbed420f073c7cf not found: ID does not exist" containerID="0a2739fa116d796f2d1156683bcdcb7a27118b0aa901cdc56bbed420f073c7cf" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.469972 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0a2739fa116d796f2d1156683bcdcb7a27118b0aa901cdc56bbed420f073c7cf"} err="failed to get container status \"0a2739fa116d796f2d1156683bcdcb7a27118b0aa901cdc56bbed420f073c7cf\": rpc error: code = NotFound desc = could not find container \"0a2739fa116d796f2d1156683bcdcb7a27118b0aa901cdc56bbed420f073c7cf\": container with ID starting with 0a2739fa116d796f2d1156683bcdcb7a27118b0aa901cdc56bbed420f073c7cf not found: ID does not exist" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.470015 4708 scope.go:117] "RemoveContainer" containerID="9bc5ea8b2cbca142cef6ab45cfe34b36ee246154d301ea0420001f50516ed30f" Feb 27 18:51:16 crc kubenswrapper[4708]: E0227 18:51:16.470254 4708 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"9bc5ea8b2cbca142cef6ab45cfe34b36ee246154d301ea0420001f50516ed30f\": container with ID starting with 9bc5ea8b2cbca142cef6ab45cfe34b36ee246154d301ea0420001f50516ed30f not found: ID does not exist" containerID="9bc5ea8b2cbca142cef6ab45cfe34b36ee246154d301ea0420001f50516ed30f" Feb 27 18:51:16 crc kubenswrapper[4708]: I0227 18:51:16.470282 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9bc5ea8b2cbca142cef6ab45cfe34b36ee246154d301ea0420001f50516ed30f"} err="failed to get container status \"9bc5ea8b2cbca142cef6ab45cfe34b36ee246154d301ea0420001f50516ed30f\": rpc error: code = NotFound desc = could not find container \"9bc5ea8b2cbca142cef6ab45cfe34b36ee246154d301ea0420001f50516ed30f\": container with ID starting with 9bc5ea8b2cbca142cef6ab45cfe34b36ee246154d301ea0420001f50516ed30f not found: ID does not exist" Feb 27 18:51:18 crc kubenswrapper[4708]: I0227 18:51:18.248239 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="792bc79d-f48f-4780-8943-84625d7aaddf" path="/var/lib/kubelet/pods/792bc79d-f48f-4780-8943-84625d7aaddf/volumes" Feb 27 18:52:00 crc kubenswrapper[4708]: I0227 18:52:00.166110 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536972-v4hn4"] Feb 27 18:52:00 crc kubenswrapper[4708]: E0227 18:52:00.167377 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792bc79d-f48f-4780-8943-84625d7aaddf" containerName="registry-server" Feb 27 18:52:00 crc kubenswrapper[4708]: I0227 18:52:00.167398 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="792bc79d-f48f-4780-8943-84625d7aaddf" containerName="registry-server" Feb 27 18:52:00 crc kubenswrapper[4708]: E0227 18:52:00.167455 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792bc79d-f48f-4780-8943-84625d7aaddf" containerName="extract-utilities" Feb 27 18:52:00 crc kubenswrapper[4708]: I0227 18:52:00.167467 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="792bc79d-f48f-4780-8943-84625d7aaddf" containerName="extract-utilities" Feb 27 18:52:00 crc kubenswrapper[4708]: E0227 18:52:00.167490 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="792bc79d-f48f-4780-8943-84625d7aaddf" containerName="extract-content" Feb 27 18:52:00 crc kubenswrapper[4708]: I0227 18:52:00.167500 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="792bc79d-f48f-4780-8943-84625d7aaddf" containerName="extract-content" Feb 27 18:52:00 crc kubenswrapper[4708]: I0227 18:52:00.167876 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="792bc79d-f48f-4780-8943-84625d7aaddf" containerName="registry-server" Feb 27 18:52:00 crc kubenswrapper[4708]: I0227 18:52:00.169096 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536972-v4hn4" Feb 27 18:52:00 crc kubenswrapper[4708]: I0227 18:52:00.172704 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:52:00 crc kubenswrapper[4708]: I0227 18:52:00.173004 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:52:00 crc kubenswrapper[4708]: I0227 18:52:00.173218 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 18:52:00 crc kubenswrapper[4708]: I0227 18:52:00.183980 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536972-v4hn4"] Feb 27 18:52:00 crc kubenswrapper[4708]: I0227 18:52:00.267040 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mvpp\" (UniqueName: \"kubernetes.io/projected/d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff-kube-api-access-4mvpp\") pod \"auto-csr-approver-29536972-v4hn4\" (UID: \"d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff\") " pod="openshift-infra/auto-csr-approver-29536972-v4hn4" Feb 27 18:52:00 crc kubenswrapper[4708]: I0227 18:52:00.369721 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mvpp\" (UniqueName: \"kubernetes.io/projected/d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff-kube-api-access-4mvpp\") pod \"auto-csr-approver-29536972-v4hn4\" (UID: \"d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff\") " pod="openshift-infra/auto-csr-approver-29536972-v4hn4" Feb 27 18:52:00 crc kubenswrapper[4708]: I0227 18:52:00.393783 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mvpp\" (UniqueName: \"kubernetes.io/projected/d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff-kube-api-access-4mvpp\") pod \"auto-csr-approver-29536972-v4hn4\" (UID: \"d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff\") " pod="openshift-infra/auto-csr-approver-29536972-v4hn4" Feb 27 18:52:00 crc kubenswrapper[4708]: I0227 18:52:00.501355 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536972-v4hn4" Feb 27 18:52:01 crc kubenswrapper[4708]: I0227 18:52:01.015339 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536972-v4hn4"] Feb 27 18:52:01 crc kubenswrapper[4708]: I0227 18:52:01.865440 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536972-v4hn4" event={"ID":"d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff","Type":"ContainerStarted","Data":"5b352e5602f267ffbd36faa84800b237ba3fdd5b1baf5e6bf26f0ef6e669f937"} Feb 27 18:52:02 crc kubenswrapper[4708]: I0227 18:52:02.875594 4708 generic.go:334] "Generic (PLEG): container finished" podID="d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff" containerID="06e94ba88ff441236b678cbf333e7d60bb20eded19cbbd00d0c6a0aa45fa9131" exitCode=0 Feb 27 18:52:02 crc kubenswrapper[4708]: I0227 18:52:02.875668 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536972-v4hn4" event={"ID":"d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff","Type":"ContainerDied","Data":"06e94ba88ff441236b678cbf333e7d60bb20eded19cbbd00d0c6a0aa45fa9131"} Feb 27 18:52:04 crc kubenswrapper[4708]: I0227 18:52:04.343315 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536972-v4hn4" Feb 27 18:52:04 crc kubenswrapper[4708]: I0227 18:52:04.456747 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mvpp\" (UniqueName: \"kubernetes.io/projected/d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff-kube-api-access-4mvpp\") pod \"d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff\" (UID: \"d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff\") " Feb 27 18:52:04 crc kubenswrapper[4708]: I0227 18:52:04.463774 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff-kube-api-access-4mvpp" (OuterVolumeSpecName: "kube-api-access-4mvpp") pod "d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff" (UID: "d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff"). InnerVolumeSpecName "kube-api-access-4mvpp". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:52:04 crc kubenswrapper[4708]: I0227 18:52:04.560335 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mvpp\" (UniqueName: \"kubernetes.io/projected/d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff-kube-api-access-4mvpp\") on node \"crc\" DevicePath \"\"" Feb 27 18:52:04 crc kubenswrapper[4708]: I0227 18:52:04.901671 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536972-v4hn4" event={"ID":"d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff","Type":"ContainerDied","Data":"5b352e5602f267ffbd36faa84800b237ba3fdd5b1baf5e6bf26f0ef6e669f937"} Feb 27 18:52:04 crc kubenswrapper[4708]: I0227 18:52:04.901730 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b352e5602f267ffbd36faa84800b237ba3fdd5b1baf5e6bf26f0ef6e669f937" Feb 27 18:52:04 crc kubenswrapper[4708]: I0227 18:52:04.901750 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536972-v4hn4" Feb 27 18:52:05 crc kubenswrapper[4708]: I0227 18:52:05.425468 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536966-k2vpz"] Feb 27 18:52:05 crc kubenswrapper[4708]: I0227 18:52:05.436915 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536966-k2vpz"] Feb 27 18:52:06 crc kubenswrapper[4708]: I0227 18:52:06.243738 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cf726d56-35b5-4c1d-be1c-ded0e4d30ca2" path="/var/lib/kubelet/pods/cf726d56-35b5-4c1d-be1c-ded0e4d30ca2/volumes" Feb 27 18:52:07 crc kubenswrapper[4708]: I0227 18:52:07.683051 4708 scope.go:117] "RemoveContainer" containerID="8165d8fdafef9ea437773c43d21d1ef4eeb53e669e6666200950ba58db4f630c" Feb 27 18:53:35 crc kubenswrapper[4708]: I0227 18:53:35.631610 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:53:35 crc kubenswrapper[4708]: I0227 18:53:35.632242 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:54:00 crc kubenswrapper[4708]: I0227 18:54:00.155482 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536974-4vqhn"] Feb 27 18:54:00 crc kubenswrapper[4708]: E0227 18:54:00.156823 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff" containerName="oc" Feb 27 18:54:00 crc kubenswrapper[4708]: I0227 18:54:00.156874 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff" containerName="oc" Feb 27 18:54:00 crc kubenswrapper[4708]: I0227 18:54:00.157284 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff" containerName="oc" Feb 27 18:54:00 crc kubenswrapper[4708]: I0227 18:54:00.158473 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536974-4vqhn" Feb 27 18:54:00 crc kubenswrapper[4708]: I0227 18:54:00.165331 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536974-4vqhn"] Feb 27 18:54:00 crc kubenswrapper[4708]: I0227 18:54:00.165456 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 18:54:00 crc kubenswrapper[4708]: I0227 18:54:00.165614 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:54:00 crc kubenswrapper[4708]: I0227 18:54:00.165778 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:54:00 crc kubenswrapper[4708]: I0227 18:54:00.266351 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v6xg\" (UniqueName: \"kubernetes.io/projected/13fe94df-976c-47e0-a7a4-697c38d4eac9-kube-api-access-4v6xg\") pod \"auto-csr-approver-29536974-4vqhn\" (UID: \"13fe94df-976c-47e0-a7a4-697c38d4eac9\") " pod="openshift-infra/auto-csr-approver-29536974-4vqhn" Feb 27 18:54:00 crc kubenswrapper[4708]: I0227 18:54:00.370971 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4v6xg\" (UniqueName: \"kubernetes.io/projected/13fe94df-976c-47e0-a7a4-697c38d4eac9-kube-api-access-4v6xg\") pod \"auto-csr-approver-29536974-4vqhn\" (UID: \"13fe94df-976c-47e0-a7a4-697c38d4eac9\") " pod="openshift-infra/auto-csr-approver-29536974-4vqhn" Feb 27 18:54:00 crc kubenswrapper[4708]: I0227 18:54:00.405646 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4v6xg\" (UniqueName: \"kubernetes.io/projected/13fe94df-976c-47e0-a7a4-697c38d4eac9-kube-api-access-4v6xg\") pod \"auto-csr-approver-29536974-4vqhn\" (UID: \"13fe94df-976c-47e0-a7a4-697c38d4eac9\") " pod="openshift-infra/auto-csr-approver-29536974-4vqhn" Feb 27 18:54:00 crc kubenswrapper[4708]: I0227 18:54:00.489593 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536974-4vqhn" Feb 27 18:54:00 crc kubenswrapper[4708]: I0227 18:54:00.920409 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536974-4vqhn"] Feb 27 18:54:00 crc kubenswrapper[4708]: W0227 18:54:00.926701 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13fe94df_976c_47e0_a7a4_697c38d4eac9.slice/crio-e6bf158479b05a18001cc7d501cba7fd8a797b4343021b2d7bafecbc0f5615ba WatchSource:0}: Error finding container e6bf158479b05a18001cc7d501cba7fd8a797b4343021b2d7bafecbc0f5615ba: Status 404 returned error can't find the container with id e6bf158479b05a18001cc7d501cba7fd8a797b4343021b2d7bafecbc0f5615ba Feb 27 18:54:01 crc kubenswrapper[4708]: I0227 18:54:01.240088 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536974-4vqhn" event={"ID":"13fe94df-976c-47e0-a7a4-697c38d4eac9","Type":"ContainerStarted","Data":"e6bf158479b05a18001cc7d501cba7fd8a797b4343021b2d7bafecbc0f5615ba"} Feb 27 18:54:02 crc kubenswrapper[4708]: I0227 18:54:02.267119 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536974-4vqhn" event={"ID":"13fe94df-976c-47e0-a7a4-697c38d4eac9","Type":"ContainerStarted","Data":"8f982f0eb50667723a8f9d5355bf8b2b63e19f85df18ccbd4d63a8e4b7d9a2ac"} Feb 27 18:54:02 crc kubenswrapper[4708]: I0227 18:54:02.285661 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536974-4vqhn" podStartSLOduration=1.422497655 podStartE2EDuration="2.285636439s" podCreationTimestamp="2026-02-27 18:54:00 +0000 UTC" firstStartedPulling="2026-02-27 18:54:00.929050109 +0000 UTC m=+7239.444847696" lastFinishedPulling="2026-02-27 18:54:01.792188893 +0000 UTC m=+7240.307986480" observedRunningTime="2026-02-27 18:54:02.284885387 +0000 UTC m=+7240.800682974" watchObservedRunningTime="2026-02-27 18:54:02.285636439 +0000 UTC m=+7240.801434036" Feb 27 18:54:03 crc kubenswrapper[4708]: I0227 18:54:03.282451 4708 generic.go:334] "Generic (PLEG): container finished" podID="13fe94df-976c-47e0-a7a4-697c38d4eac9" containerID="8f982f0eb50667723a8f9d5355bf8b2b63e19f85df18ccbd4d63a8e4b7d9a2ac" exitCode=0 Feb 27 18:54:03 crc kubenswrapper[4708]: I0227 18:54:03.282677 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536974-4vqhn" event={"ID":"13fe94df-976c-47e0-a7a4-697c38d4eac9","Type":"ContainerDied","Data":"8f982f0eb50667723a8f9d5355bf8b2b63e19f85df18ccbd4d63a8e4b7d9a2ac"} Feb 27 18:54:04 crc kubenswrapper[4708]: I0227 18:54:04.800326 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536974-4vqhn" Feb 27 18:54:04 crc kubenswrapper[4708]: I0227 18:54:04.880047 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v6xg\" (UniqueName: \"kubernetes.io/projected/13fe94df-976c-47e0-a7a4-697c38d4eac9-kube-api-access-4v6xg\") pod \"13fe94df-976c-47e0-a7a4-697c38d4eac9\" (UID: \"13fe94df-976c-47e0-a7a4-697c38d4eac9\") " Feb 27 18:54:04 crc kubenswrapper[4708]: I0227 18:54:04.885395 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13fe94df-976c-47e0-a7a4-697c38d4eac9-kube-api-access-4v6xg" (OuterVolumeSpecName: "kube-api-access-4v6xg") pod "13fe94df-976c-47e0-a7a4-697c38d4eac9" (UID: "13fe94df-976c-47e0-a7a4-697c38d4eac9"). InnerVolumeSpecName "kube-api-access-4v6xg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:54:04 crc kubenswrapper[4708]: I0227 18:54:04.982026 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4v6xg\" (UniqueName: \"kubernetes.io/projected/13fe94df-976c-47e0-a7a4-697c38d4eac9-kube-api-access-4v6xg\") on node \"crc\" DevicePath \"\"" Feb 27 18:54:05 crc kubenswrapper[4708]: I0227 18:54:05.306867 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536974-4vqhn" event={"ID":"13fe94df-976c-47e0-a7a4-697c38d4eac9","Type":"ContainerDied","Data":"e6bf158479b05a18001cc7d501cba7fd8a797b4343021b2d7bafecbc0f5615ba"} Feb 27 18:54:05 crc kubenswrapper[4708]: I0227 18:54:05.307194 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e6bf158479b05a18001cc7d501cba7fd8a797b4343021b2d7bafecbc0f5615ba" Feb 27 18:54:05 crc kubenswrapper[4708]: I0227 18:54:05.306935 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536974-4vqhn" Feb 27 18:54:05 crc kubenswrapper[4708]: I0227 18:54:05.347760 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536968-cx75j"] Feb 27 18:54:05 crc kubenswrapper[4708]: I0227 18:54:05.363169 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536968-cx75j"] Feb 27 18:54:05 crc kubenswrapper[4708]: I0227 18:54:05.631445 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:54:05 crc kubenswrapper[4708]: I0227 18:54:05.631510 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:54:06 crc kubenswrapper[4708]: I0227 18:54:06.239938 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6636544-dc9f-4303-9b0d-d11f4ad26518" path="/var/lib/kubelet/pods/c6636544-dc9f-4303-9b0d-d11f4ad26518/volumes" Feb 27 18:54:07 crc kubenswrapper[4708]: I0227 18:54:07.824094 4708 scope.go:117] "RemoveContainer" containerID="02653b3f26948da5318eff30950ac38b2a9b8db79d9c226182c8ea5e46ce0eca" Feb 27 18:54:35 crc kubenswrapper[4708]: I0227 18:54:35.631457 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 18:54:35 crc kubenswrapper[4708]: I0227 18:54:35.632093 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 18:54:35 crc kubenswrapper[4708]: I0227 18:54:35.632140 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 18:54:35 crc kubenswrapper[4708]: I0227 18:54:35.633054 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 18:54:35 crc kubenswrapper[4708]: I0227 18:54:35.633133 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" gracePeriod=600 Feb 27 18:54:35 crc kubenswrapper[4708]: E0227 18:54:35.767869 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:54:36 crc kubenswrapper[4708]: I0227 18:54:36.677612 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" exitCode=0 Feb 27 18:54:36 crc kubenswrapper[4708]: I0227 18:54:36.678615 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3"} Feb 27 18:54:36 crc kubenswrapper[4708]: I0227 18:54:36.678699 4708 scope.go:117] "RemoveContainer" containerID="02670e62352cea09948961e7b31355646c61d95bd708fd2408e9c0930269613b" Feb 27 18:54:36 crc kubenswrapper[4708]: I0227 18:54:36.679361 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:54:36 crc kubenswrapper[4708]: E0227 18:54:36.679610 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:54:48 crc kubenswrapper[4708]: I0227 18:54:48.229356 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:54:48 crc kubenswrapper[4708]: E0227 18:54:48.230513 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:54:59 crc kubenswrapper[4708]: I0227 18:54:59.229331 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:54:59 crc kubenswrapper[4708]: E0227 18:54:59.230270 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:55:11 crc kubenswrapper[4708]: I0227 18:55:11.229753 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:55:11 crc kubenswrapper[4708]: E0227 18:55:11.231169 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:55:25 crc kubenswrapper[4708]: I0227 18:55:25.229664 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:55:25 crc kubenswrapper[4708]: E0227 18:55:25.230809 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:55:40 crc kubenswrapper[4708]: I0227 18:55:40.229168 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:55:40 crc kubenswrapper[4708]: E0227 18:55:40.229811 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:55:43 crc kubenswrapper[4708]: I0227 18:55:43.510230 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-65nxq"] Feb 27 18:55:43 crc kubenswrapper[4708]: E0227 18:55:43.511063 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13fe94df-976c-47e0-a7a4-697c38d4eac9" containerName="oc" Feb 27 18:55:43 crc kubenswrapper[4708]: I0227 18:55:43.511078 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="13fe94df-976c-47e0-a7a4-697c38d4eac9" containerName="oc" Feb 27 18:55:43 crc kubenswrapper[4708]: I0227 18:55:43.511287 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="13fe94df-976c-47e0-a7a4-697c38d4eac9" containerName="oc" Feb 27 18:55:43 crc kubenswrapper[4708]: I0227 18:55:43.512712 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:43 crc kubenswrapper[4708]: I0227 18:55:43.535995 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-65nxq"] Feb 27 18:55:43 crc kubenswrapper[4708]: I0227 18:55:43.699781 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgrsb\" (UniqueName: \"kubernetes.io/projected/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-kube-api-access-cgrsb\") pod \"redhat-marketplace-65nxq\" (UID: \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\") " pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:43 crc kubenswrapper[4708]: I0227 18:55:43.699831 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-utilities\") pod \"redhat-marketplace-65nxq\" (UID: \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\") " pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:43 crc kubenswrapper[4708]: I0227 18:55:43.700190 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-catalog-content\") pod \"redhat-marketplace-65nxq\" (UID: \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\") " pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:43 crc kubenswrapper[4708]: I0227 18:55:43.802925 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-catalog-content\") pod \"redhat-marketplace-65nxq\" (UID: \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\") " pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:43 crc kubenswrapper[4708]: I0227 18:55:43.803108 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cgrsb\" (UniqueName: \"kubernetes.io/projected/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-kube-api-access-cgrsb\") pod \"redhat-marketplace-65nxq\" (UID: \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\") " pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:43 crc kubenswrapper[4708]: I0227 18:55:43.803474 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-catalog-content\") pod \"redhat-marketplace-65nxq\" (UID: \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\") " pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:43 crc kubenswrapper[4708]: I0227 18:55:43.804136 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-utilities\") pod \"redhat-marketplace-65nxq\" (UID: \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\") " pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:43 crc kubenswrapper[4708]: I0227 18:55:43.804195 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-utilities\") pod \"redhat-marketplace-65nxq\" (UID: \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\") " pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:43 crc kubenswrapper[4708]: I0227 18:55:43.842070 4708 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-cgrsb\" (UniqueName: \"kubernetes.io/projected/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-kube-api-access-cgrsb\") pod \"redhat-marketplace-65nxq\" (UID: \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\") " pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:44 crc kubenswrapper[4708]: I0227 18:55:44.139176 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:44 crc kubenswrapper[4708]: I0227 18:55:44.822723 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-65nxq"] Feb 27 18:55:45 crc kubenswrapper[4708]: I0227 18:55:45.528782 4708 generic.go:334] "Generic (PLEG): container finished" podID="0e936fc4-aad9-4747-b1a1-bb9faa5aca40" containerID="a6650e975af79b725f5e3b4e09128af769cf0fd4f50e228076aef2184c32dd99" exitCode=0 Feb 27 18:55:45 crc kubenswrapper[4708]: I0227 18:55:45.528868 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65nxq" event={"ID":"0e936fc4-aad9-4747-b1a1-bb9faa5aca40","Type":"ContainerDied","Data":"a6650e975af79b725f5e3b4e09128af769cf0fd4f50e228076aef2184c32dd99"} Feb 27 18:55:45 crc kubenswrapper[4708]: I0227 18:55:45.529599 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65nxq" event={"ID":"0e936fc4-aad9-4747-b1a1-bb9faa5aca40","Type":"ContainerStarted","Data":"9b1b8acf162041a91e8bd41707699d0204f0c53715ac7ebd5b1518616d37cd6e"} Feb 27 18:55:47 crc kubenswrapper[4708]: I0227 18:55:47.583747 4708 generic.go:334] "Generic (PLEG): container finished" podID="0e936fc4-aad9-4747-b1a1-bb9faa5aca40" containerID="27b3d3ed56a56ae2a0f285600689e3b87f5d8f20306f300e2982b968c80fbe10" exitCode=0 Feb 27 18:55:47 crc kubenswrapper[4708]: I0227 18:55:47.584260 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65nxq" event={"ID":"0e936fc4-aad9-4747-b1a1-bb9faa5aca40","Type":"ContainerDied","Data":"27b3d3ed56a56ae2a0f285600689e3b87f5d8f20306f300e2982b968c80fbe10"} Feb 27 18:55:48 crc kubenswrapper[4708]: I0227 18:55:48.600179 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65nxq" event={"ID":"0e936fc4-aad9-4747-b1a1-bb9faa5aca40","Type":"ContainerStarted","Data":"c8dd2480d854d7a5e8e807c8bca9f3b0d5e9cf7a3fbc019b1d8eb85266e3300c"} Feb 27 18:55:48 crc kubenswrapper[4708]: I0227 18:55:48.631911 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-65nxq" podStartSLOduration=3.123437251 podStartE2EDuration="5.631891814s" podCreationTimestamp="2026-02-27 18:55:43 +0000 UTC" firstStartedPulling="2026-02-27 18:55:45.535250048 +0000 UTC m=+7344.051047655" lastFinishedPulling="2026-02-27 18:55:48.043704611 +0000 UTC m=+7346.559502218" observedRunningTime="2026-02-27 18:55:48.624939778 +0000 UTC m=+7347.140737395" watchObservedRunningTime="2026-02-27 18:55:48.631891814 +0000 UTC m=+7347.147689401" Feb 27 18:55:52 crc kubenswrapper[4708]: I0227 18:55:52.237183 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:55:52 crc kubenswrapper[4708]: E0227 18:55:52.237788 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:55:54 crc kubenswrapper[4708]: I0227 18:55:54.140268 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:54 crc kubenswrapper[4708]: I0227 18:55:54.140830 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:54 crc kubenswrapper[4708]: I0227 18:55:54.210232 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:54 crc kubenswrapper[4708]: I0227 18:55:54.739105 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:54 crc kubenswrapper[4708]: I0227 18:55:54.822483 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-65nxq"] Feb 27 18:55:56 crc kubenswrapper[4708]: I0227 18:55:56.690537 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-65nxq" podUID="0e936fc4-aad9-4747-b1a1-bb9faa5aca40" containerName="registry-server" containerID="cri-o://c8dd2480d854d7a5e8e807c8bca9f3b0d5e9cf7a3fbc019b1d8eb85266e3300c" gracePeriod=2 Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.242437 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.400223 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-utilities\") pod \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\" (UID: \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\") " Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.400478 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cgrsb\" (UniqueName: \"kubernetes.io/projected/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-kube-api-access-cgrsb\") pod \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\" (UID: \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\") " Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.400535 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-catalog-content\") pod \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\" (UID: \"0e936fc4-aad9-4747-b1a1-bb9faa5aca40\") " Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.401274 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-utilities" (OuterVolumeSpecName: "utilities") pod "0e936fc4-aad9-4747-b1a1-bb9faa5aca40" (UID: "0e936fc4-aad9-4747-b1a1-bb9faa5aca40"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.409823 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-kube-api-access-cgrsb" (OuterVolumeSpecName: "kube-api-access-cgrsb") pod "0e936fc4-aad9-4747-b1a1-bb9faa5aca40" (UID: "0e936fc4-aad9-4747-b1a1-bb9faa5aca40"). InnerVolumeSpecName "kube-api-access-cgrsb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.427399 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e936fc4-aad9-4747-b1a1-bb9faa5aca40" (UID: "0e936fc4-aad9-4747-b1a1-bb9faa5aca40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.504477 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cgrsb\" (UniqueName: \"kubernetes.io/projected/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-kube-api-access-cgrsb\") on node \"crc\" DevicePath \"\"" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.504870 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.504897 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e936fc4-aad9-4747-b1a1-bb9faa5aca40-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.704991 4708 generic.go:334] "Generic (PLEG): container finished" podID="0e936fc4-aad9-4747-b1a1-bb9faa5aca40" containerID="c8dd2480d854d7a5e8e807c8bca9f3b0d5e9cf7a3fbc019b1d8eb85266e3300c" exitCode=0 Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.705057 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65nxq" event={"ID":"0e936fc4-aad9-4747-b1a1-bb9faa5aca40","Type":"ContainerDied","Data":"c8dd2480d854d7a5e8e807c8bca9f3b0d5e9cf7a3fbc019b1d8eb85266e3300c"} Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.705093 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-65nxq" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.705117 4708 scope.go:117] "RemoveContainer" containerID="c8dd2480d854d7a5e8e807c8bca9f3b0d5e9cf7a3fbc019b1d8eb85266e3300c" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.705102 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-65nxq" event={"ID":"0e936fc4-aad9-4747-b1a1-bb9faa5aca40","Type":"ContainerDied","Data":"9b1b8acf162041a91e8bd41707699d0204f0c53715ac7ebd5b1518616d37cd6e"} Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.738551 4708 scope.go:117] "RemoveContainer" containerID="27b3d3ed56a56ae2a0f285600689e3b87f5d8f20306f300e2982b968c80fbe10" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.758918 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-65nxq"] Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.772726 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-65nxq"] Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.790928 4708 scope.go:117] "RemoveContainer" containerID="a6650e975af79b725f5e3b4e09128af769cf0fd4f50e228076aef2184c32dd99" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.832492 4708 scope.go:117] "RemoveContainer" containerID="c8dd2480d854d7a5e8e807c8bca9f3b0d5e9cf7a3fbc019b1d8eb85266e3300c" Feb 27 18:55:57 crc kubenswrapper[4708]: E0227 18:55:57.833146 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8dd2480d854d7a5e8e807c8bca9f3b0d5e9cf7a3fbc019b1d8eb85266e3300c\": container with ID starting with c8dd2480d854d7a5e8e807c8bca9f3b0d5e9cf7a3fbc019b1d8eb85266e3300c not found: ID does not exist" containerID="c8dd2480d854d7a5e8e807c8bca9f3b0d5e9cf7a3fbc019b1d8eb85266e3300c" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.833213 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8dd2480d854d7a5e8e807c8bca9f3b0d5e9cf7a3fbc019b1d8eb85266e3300c"} err="failed to get container status \"c8dd2480d854d7a5e8e807c8bca9f3b0d5e9cf7a3fbc019b1d8eb85266e3300c\": rpc error: code = NotFound desc = could not find container \"c8dd2480d854d7a5e8e807c8bca9f3b0d5e9cf7a3fbc019b1d8eb85266e3300c\": container with ID starting with c8dd2480d854d7a5e8e807c8bca9f3b0d5e9cf7a3fbc019b1d8eb85266e3300c not found: ID does not exist" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.833254 4708 scope.go:117] "RemoveContainer" containerID="27b3d3ed56a56ae2a0f285600689e3b87f5d8f20306f300e2982b968c80fbe10" Feb 27 18:55:57 crc kubenswrapper[4708]: E0227 18:55:57.833739 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27b3d3ed56a56ae2a0f285600689e3b87f5d8f20306f300e2982b968c80fbe10\": container with ID starting with 27b3d3ed56a56ae2a0f285600689e3b87f5d8f20306f300e2982b968c80fbe10 not found: ID does not exist" containerID="27b3d3ed56a56ae2a0f285600689e3b87f5d8f20306f300e2982b968c80fbe10" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.833776 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27b3d3ed56a56ae2a0f285600689e3b87f5d8f20306f300e2982b968c80fbe10"} err="failed to get container status \"27b3d3ed56a56ae2a0f285600689e3b87f5d8f20306f300e2982b968c80fbe10\": rpc error: code = NotFound desc = could not find 
container \"27b3d3ed56a56ae2a0f285600689e3b87f5d8f20306f300e2982b968c80fbe10\": container with ID starting with 27b3d3ed56a56ae2a0f285600689e3b87f5d8f20306f300e2982b968c80fbe10 not found: ID does not exist" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.833805 4708 scope.go:117] "RemoveContainer" containerID="a6650e975af79b725f5e3b4e09128af769cf0fd4f50e228076aef2184c32dd99" Feb 27 18:55:57 crc kubenswrapper[4708]: E0227 18:55:57.834616 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a6650e975af79b725f5e3b4e09128af769cf0fd4f50e228076aef2184c32dd99\": container with ID starting with a6650e975af79b725f5e3b4e09128af769cf0fd4f50e228076aef2184c32dd99 not found: ID does not exist" containerID="a6650e975af79b725f5e3b4e09128af769cf0fd4f50e228076aef2184c32dd99" Feb 27 18:55:57 crc kubenswrapper[4708]: I0227 18:55:57.834657 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a6650e975af79b725f5e3b4e09128af769cf0fd4f50e228076aef2184c32dd99"} err="failed to get container status \"a6650e975af79b725f5e3b4e09128af769cf0fd4f50e228076aef2184c32dd99\": rpc error: code = NotFound desc = could not find container \"a6650e975af79b725f5e3b4e09128af769cf0fd4f50e228076aef2184c32dd99\": container with ID starting with a6650e975af79b725f5e3b4e09128af769cf0fd4f50e228076aef2184c32dd99 not found: ID does not exist" Feb 27 18:55:58 crc kubenswrapper[4708]: I0227 18:55:58.240944 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e936fc4-aad9-4747-b1a1-bb9faa5aca40" path="/var/lib/kubelet/pods/0e936fc4-aad9-4747-b1a1-bb9faa5aca40/volumes" Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.170582 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536976-drwj9"] Feb 27 18:56:00 crc kubenswrapper[4708]: E0227 18:56:00.171316 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e936fc4-aad9-4747-b1a1-bb9faa5aca40" containerName="registry-server" Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.171332 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e936fc4-aad9-4747-b1a1-bb9faa5aca40" containerName="registry-server" Feb 27 18:56:00 crc kubenswrapper[4708]: E0227 18:56:00.171374 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e936fc4-aad9-4747-b1a1-bb9faa5aca40" containerName="extract-utilities" Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.171386 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e936fc4-aad9-4747-b1a1-bb9faa5aca40" containerName="extract-utilities" Feb 27 18:56:00 crc kubenswrapper[4708]: E0227 18:56:00.171421 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e936fc4-aad9-4747-b1a1-bb9faa5aca40" containerName="extract-content" Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.171429 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e936fc4-aad9-4747-b1a1-bb9faa5aca40" containerName="extract-content" Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.171692 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e936fc4-aad9-4747-b1a1-bb9faa5aca40" containerName="registry-server" Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.172742 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536976-drwj9" Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.175099 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.176937 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.177146 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.186223 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536976-drwj9"] Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.264931 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dfmdc\" (UniqueName: \"kubernetes.io/projected/9a608316-a793-460a-b4ac-e7cdba1275ed-kube-api-access-dfmdc\") pod \"auto-csr-approver-29536976-drwj9\" (UID: \"9a608316-a793-460a-b4ac-e7cdba1275ed\") " pod="openshift-infra/auto-csr-approver-29536976-drwj9" Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.366624 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dfmdc\" (UniqueName: \"kubernetes.io/projected/9a608316-a793-460a-b4ac-e7cdba1275ed-kube-api-access-dfmdc\") pod \"auto-csr-approver-29536976-drwj9\" (UID: \"9a608316-a793-460a-b4ac-e7cdba1275ed\") " pod="openshift-infra/auto-csr-approver-29536976-drwj9" Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.392614 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dfmdc\" (UniqueName: \"kubernetes.io/projected/9a608316-a793-460a-b4ac-e7cdba1275ed-kube-api-access-dfmdc\") pod \"auto-csr-approver-29536976-drwj9\" (UID: \"9a608316-a793-460a-b4ac-e7cdba1275ed\") " pod="openshift-infra/auto-csr-approver-29536976-drwj9" Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.494771 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536976-drwj9" Feb 27 18:56:00 crc kubenswrapper[4708]: W0227 18:56:00.946555 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a608316_a793_460a_b4ac_e7cdba1275ed.slice/crio-38b54b41fa66acb5cd9c845ae9e4e193f809d511cd9136c256ec82ec7bb2f32d WatchSource:0}: Error finding container 38b54b41fa66acb5cd9c845ae9e4e193f809d511cd9136c256ec82ec7bb2f32d: Status 404 returned error can't find the container with id 38b54b41fa66acb5cd9c845ae9e4e193f809d511cd9136c256ec82ec7bb2f32d Feb 27 18:56:00 crc kubenswrapper[4708]: I0227 18:56:00.956516 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536976-drwj9"] Feb 27 18:56:01 crc kubenswrapper[4708]: I0227 18:56:01.754245 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536976-drwj9" event={"ID":"9a608316-a793-460a-b4ac-e7cdba1275ed","Type":"ContainerStarted","Data":"38b54b41fa66acb5cd9c845ae9e4e193f809d511cd9136c256ec82ec7bb2f32d"} Feb 27 18:56:02 crc kubenswrapper[4708]: I0227 18:56:02.763680 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536976-drwj9" event={"ID":"9a608316-a793-460a-b4ac-e7cdba1275ed","Type":"ContainerStarted","Data":"82fd2f55f42a6d85099d2f6679af3396fec3a3108b5f01068c9404ece3021f74"} Feb 27 18:56:02 crc kubenswrapper[4708]: E0227 18:56:02.920573 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a608316_a793_460a_b4ac_e7cdba1275ed.slice/crio-conmon-82fd2f55f42a6d85099d2f6679af3396fec3a3108b5f01068c9404ece3021f74.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9a608316_a793_460a_b4ac_e7cdba1275ed.slice/crio-82fd2f55f42a6d85099d2f6679af3396fec3a3108b5f01068c9404ece3021f74.scope\": RecentStats: unable to find data in memory cache]" Feb 27 18:56:03 crc kubenswrapper[4708]: I0227 18:56:03.777223 4708 generic.go:334] "Generic (PLEG): container finished" podID="9a608316-a793-460a-b4ac-e7cdba1275ed" containerID="82fd2f55f42a6d85099d2f6679af3396fec3a3108b5f01068c9404ece3021f74" exitCode=0 Feb 27 18:56:03 crc kubenswrapper[4708]: I0227 18:56:03.777285 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536976-drwj9" event={"ID":"9a608316-a793-460a-b4ac-e7cdba1275ed","Type":"ContainerDied","Data":"82fd2f55f42a6d85099d2f6679af3396fec3a3108b5f01068c9404ece3021f74"} Feb 27 18:56:05 crc kubenswrapper[4708]: I0227 18:56:05.300379 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536976-drwj9" Feb 27 18:56:05 crc kubenswrapper[4708]: I0227 18:56:05.372101 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dfmdc\" (UniqueName: \"kubernetes.io/projected/9a608316-a793-460a-b4ac-e7cdba1275ed-kube-api-access-dfmdc\") pod \"9a608316-a793-460a-b4ac-e7cdba1275ed\" (UID: \"9a608316-a793-460a-b4ac-e7cdba1275ed\") " Feb 27 18:56:05 crc kubenswrapper[4708]: I0227 18:56:05.376550 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a608316-a793-460a-b4ac-e7cdba1275ed-kube-api-access-dfmdc" (OuterVolumeSpecName: "kube-api-access-dfmdc") pod "9a608316-a793-460a-b4ac-e7cdba1275ed" (UID: "9a608316-a793-460a-b4ac-e7cdba1275ed"). InnerVolumeSpecName "kube-api-access-dfmdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:56:05 crc kubenswrapper[4708]: I0227 18:56:05.474790 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dfmdc\" (UniqueName: \"kubernetes.io/projected/9a608316-a793-460a-b4ac-e7cdba1275ed-kube-api-access-dfmdc\") on node \"crc\" DevicePath \"\"" Feb 27 18:56:05 crc kubenswrapper[4708]: I0227 18:56:05.803439 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536976-drwj9" event={"ID":"9a608316-a793-460a-b4ac-e7cdba1275ed","Type":"ContainerDied","Data":"38b54b41fa66acb5cd9c845ae9e4e193f809d511cd9136c256ec82ec7bb2f32d"} Feb 27 18:56:05 crc kubenswrapper[4708]: I0227 18:56:05.803495 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38b54b41fa66acb5cd9c845ae9e4e193f809d511cd9136c256ec82ec7bb2f32d" Feb 27 18:56:05 crc kubenswrapper[4708]: I0227 18:56:05.803520 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536976-drwj9" Feb 27 18:56:06 crc kubenswrapper[4708]: I0227 18:56:06.394246 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536970-v68lf"] Feb 27 18:56:06 crc kubenswrapper[4708]: I0227 18:56:06.403522 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536970-v68lf"] Feb 27 18:56:07 crc kubenswrapper[4708]: I0227 18:56:07.228544 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:56:07 crc kubenswrapper[4708]: E0227 18:56:07.228896 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:56:08 crc kubenswrapper[4708]: I0227 18:56:08.240220 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5909d02e-f00e-4bfc-be51-e58327efabbe" path="/var/lib/kubelet/pods/5909d02e-f00e-4bfc-be51-e58327efabbe/volumes" Feb 27 18:56:22 crc kubenswrapper[4708]: I0227 18:56:22.235620 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:56:22 crc kubenswrapper[4708]: E0227 18:56:22.236420 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:56:33 crc kubenswrapper[4708]: I0227 18:56:33.228790 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:56:33 crc kubenswrapper[4708]: E0227 18:56:33.229833 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:56:36 crc kubenswrapper[4708]: I0227 18:56:36.701564 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hw6bq"] Feb 27 18:56:36 crc kubenswrapper[4708]: E0227 18:56:36.704023 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a608316-a793-460a-b4ac-e7cdba1275ed" containerName="oc" Feb 27 18:56:36 crc kubenswrapper[4708]: I0227 18:56:36.704155 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a608316-a793-460a-b4ac-e7cdba1275ed" containerName="oc" Feb 27 18:56:36 crc kubenswrapper[4708]: I0227 18:56:36.704517 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a608316-a793-460a-b4ac-e7cdba1275ed" containerName="oc" Feb 27 18:56:36 crc kubenswrapper[4708]: I0227 18:56:36.706739 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:36 crc kubenswrapper[4708]: I0227 18:56:36.749131 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hw6bq"] Feb 27 18:56:36 crc kubenswrapper[4708]: I0227 18:56:36.785616 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-utilities\") pod \"redhat-operators-hw6bq\" (UID: \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\") " pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:36 crc kubenswrapper[4708]: I0227 18:56:36.785665 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-catalog-content\") pod \"redhat-operators-hw6bq\" (UID: \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\") " pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:36 crc kubenswrapper[4708]: I0227 18:56:36.785926 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc6rm\" (UniqueName: \"kubernetes.io/projected/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-kube-api-access-pc6rm\") pod \"redhat-operators-hw6bq\" (UID: \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\") " pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:36 crc kubenswrapper[4708]: I0227 18:56:36.887747 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-utilities\") pod \"redhat-operators-hw6bq\" (UID: \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\") " pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:36 crc kubenswrapper[4708]: I0227 18:56:36.887801 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-catalog-content\") pod \"redhat-operators-hw6bq\" (UID: \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\") " pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:36 crc kubenswrapper[4708]: I0227 18:56:36.887886 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pc6rm\" (UniqueName: \"kubernetes.io/projected/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-kube-api-access-pc6rm\") pod \"redhat-operators-hw6bq\" (UID: \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\") " pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:36 crc kubenswrapper[4708]: I0227 18:56:36.888329 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-utilities\") pod \"redhat-operators-hw6bq\" (UID: \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\") " pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:36 crc kubenswrapper[4708]: I0227 18:56:36.888379 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-catalog-content\") pod \"redhat-operators-hw6bq\" (UID: \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\") " pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:36 crc kubenswrapper[4708]: I0227 18:56:36.904465 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-pc6rm\" (UniqueName: \"kubernetes.io/projected/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-kube-api-access-pc6rm\") pod \"redhat-operators-hw6bq\" (UID: \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\") " pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:37 crc kubenswrapper[4708]: I0227 18:56:37.045131 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:37 crc kubenswrapper[4708]: I0227 18:56:37.498827 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hw6bq"] Feb 27 18:56:38 crc kubenswrapper[4708]: I0227 18:56:38.175273 4708 generic.go:334] "Generic (PLEG): container finished" podID="8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" containerID="70e6d561ff544f87db9961425bac811428ae0af38783a06eeeaaeffb970010bd" exitCode=0 Feb 27 18:56:38 crc kubenswrapper[4708]: I0227 18:56:38.175326 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw6bq" event={"ID":"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c","Type":"ContainerDied","Data":"70e6d561ff544f87db9961425bac811428ae0af38783a06eeeaaeffb970010bd"} Feb 27 18:56:38 crc kubenswrapper[4708]: I0227 18:56:38.175570 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw6bq" event={"ID":"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c","Type":"ContainerStarted","Data":"68a6473cd9fc1c4e2f1e722d125d822323d8c757b0f50b6e1a01c69913a9b529"} Feb 27 18:56:38 crc kubenswrapper[4708]: I0227 18:56:38.178920 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 18:56:40 crc kubenswrapper[4708]: I0227 18:56:40.216232 4708 generic.go:334] "Generic (PLEG): container finished" podID="8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" containerID="45d37888e65acf2ac2ab696468e6078de3b226d9cb47536c12abacfee45caf7e" exitCode=0 Feb 27 18:56:40 crc kubenswrapper[4708]: I0227 18:56:40.216482 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw6bq" event={"ID":"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c","Type":"ContainerDied","Data":"45d37888e65acf2ac2ab696468e6078de3b226d9cb47536c12abacfee45caf7e"} Feb 27 18:56:41 crc kubenswrapper[4708]: I0227 18:56:41.232948 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw6bq" event={"ID":"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c","Type":"ContainerStarted","Data":"449160b9f89e06887944ad87dfab94523f5b87e0e6ceffcb92f54aa330027e32"} Feb 27 18:56:41 crc kubenswrapper[4708]: I0227 18:56:41.273670 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hw6bq" podStartSLOduration=2.774617142 podStartE2EDuration="5.27364046s" podCreationTimestamp="2026-02-27 18:56:36 +0000 UTC" firstStartedPulling="2026-02-27 18:56:38.17866002 +0000 UTC m=+7396.694457607" lastFinishedPulling="2026-02-27 18:56:40.677683298 +0000 UTC m=+7399.193480925" observedRunningTime="2026-02-27 18:56:41.256636459 +0000 UTC m=+7399.772434086" watchObservedRunningTime="2026-02-27 18:56:41.27364046 +0000 UTC m=+7399.789438087" Feb 27 18:56:47 crc kubenswrapper[4708]: I0227 18:56:47.046237 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:47 crc kubenswrapper[4708]: I0227 18:56:47.047520 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:47 crc kubenswrapper[4708]: I0227 18:56:47.228496 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:56:47 crc kubenswrapper[4708]: E0227 18:56:47.228911 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:56:48 crc kubenswrapper[4708]: I0227 18:56:48.094003 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hw6bq" podUID="8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" containerName="registry-server" probeResult="failure" output=< Feb 27 18:56:48 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 18:56:48 crc kubenswrapper[4708]: > Feb 27 18:56:57 crc kubenswrapper[4708]: I0227 18:56:57.125932 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:57 crc kubenswrapper[4708]: I0227 18:56:57.189305 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:57 crc kubenswrapper[4708]: I0227 18:56:57.374916 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hw6bq"] Feb 27 18:56:58 crc kubenswrapper[4708]: I0227 18:56:58.422240 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hw6bq" podUID="8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" containerName="registry-server" containerID="cri-o://449160b9f89e06887944ad87dfab94523f5b87e0e6ceffcb92f54aa330027e32" gracePeriod=2 Feb 27 18:56:58 crc kubenswrapper[4708]: I0227 18:56:58.979257 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.180910 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-utilities\") pod \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\" (UID: \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\") " Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.181082 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-catalog-content\") pod \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\" (UID: \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\") " Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.181423 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pc6rm\" (UniqueName: \"kubernetes.io/projected/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-kube-api-access-pc6rm\") pod \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\" (UID: \"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c\") " Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.181911 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-utilities" (OuterVolumeSpecName: "utilities") pod "8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" (UID: "8d6f5bbc-61bb-44eb-9076-28cb6a8be18c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.182166 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.186879 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-kube-api-access-pc6rm" (OuterVolumeSpecName: "kube-api-access-pc6rm") pod "8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" (UID: "8d6f5bbc-61bb-44eb-9076-28cb6a8be18c"). InnerVolumeSpecName "kube-api-access-pc6rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.283385 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pc6rm\" (UniqueName: \"kubernetes.io/projected/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-kube-api-access-pc6rm\") on node \"crc\" DevicePath \"\"" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.320435 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" (UID: "8d6f5bbc-61bb-44eb-9076-28cb6a8be18c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.385729 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.434559 4708 generic.go:334] "Generic (PLEG): container finished" podID="8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" containerID="449160b9f89e06887944ad87dfab94523f5b87e0e6ceffcb92f54aa330027e32" exitCode=0 Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.434606 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw6bq" event={"ID":"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c","Type":"ContainerDied","Data":"449160b9f89e06887944ad87dfab94523f5b87e0e6ceffcb92f54aa330027e32"} Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.434637 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hw6bq" event={"ID":"8d6f5bbc-61bb-44eb-9076-28cb6a8be18c","Type":"ContainerDied","Data":"68a6473cd9fc1c4e2f1e722d125d822323d8c757b0f50b6e1a01c69913a9b529"} Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.434657 4708 scope.go:117] "RemoveContainer" containerID="449160b9f89e06887944ad87dfab94523f5b87e0e6ceffcb92f54aa330027e32" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.434723 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hw6bq" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.456505 4708 scope.go:117] "RemoveContainer" containerID="45d37888e65acf2ac2ab696468e6078de3b226d9cb47536c12abacfee45caf7e" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.488738 4708 scope.go:117] "RemoveContainer" containerID="70e6d561ff544f87db9961425bac811428ae0af38783a06eeeaaeffb970010bd" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.569502 4708 scope.go:117] "RemoveContainer" containerID="449160b9f89e06887944ad87dfab94523f5b87e0e6ceffcb92f54aa330027e32" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.569520 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hw6bq"] Feb 27 18:56:59 crc kubenswrapper[4708]: E0227 18:56:59.570230 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"449160b9f89e06887944ad87dfab94523f5b87e0e6ceffcb92f54aa330027e32\": container with ID starting with 449160b9f89e06887944ad87dfab94523f5b87e0e6ceffcb92f54aa330027e32 not found: ID does not exist" containerID="449160b9f89e06887944ad87dfab94523f5b87e0e6ceffcb92f54aa330027e32" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.570269 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"449160b9f89e06887944ad87dfab94523f5b87e0e6ceffcb92f54aa330027e32"} err="failed to get container status \"449160b9f89e06887944ad87dfab94523f5b87e0e6ceffcb92f54aa330027e32\": rpc error: code = NotFound desc = could not find container \"449160b9f89e06887944ad87dfab94523f5b87e0e6ceffcb92f54aa330027e32\": container with ID starting with 449160b9f89e06887944ad87dfab94523f5b87e0e6ceffcb92f54aa330027e32 not found: ID does not exist" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.570298 4708 scope.go:117] "RemoveContainer" containerID="45d37888e65acf2ac2ab696468e6078de3b226d9cb47536c12abacfee45caf7e" Feb 27 18:56:59 
crc kubenswrapper[4708]: E0227 18:56:59.570699 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"45d37888e65acf2ac2ab696468e6078de3b226d9cb47536c12abacfee45caf7e\": container with ID starting with 45d37888e65acf2ac2ab696468e6078de3b226d9cb47536c12abacfee45caf7e not found: ID does not exist" containerID="45d37888e65acf2ac2ab696468e6078de3b226d9cb47536c12abacfee45caf7e" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.570726 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"45d37888e65acf2ac2ab696468e6078de3b226d9cb47536c12abacfee45caf7e"} err="failed to get container status \"45d37888e65acf2ac2ab696468e6078de3b226d9cb47536c12abacfee45caf7e\": rpc error: code = NotFound desc = could not find container \"45d37888e65acf2ac2ab696468e6078de3b226d9cb47536c12abacfee45caf7e\": container with ID starting with 45d37888e65acf2ac2ab696468e6078de3b226d9cb47536c12abacfee45caf7e not found: ID does not exist" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.570746 4708 scope.go:117] "RemoveContainer" containerID="70e6d561ff544f87db9961425bac811428ae0af38783a06eeeaaeffb970010bd" Feb 27 18:56:59 crc kubenswrapper[4708]: E0227 18:56:59.571878 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70e6d561ff544f87db9961425bac811428ae0af38783a06eeeaaeffb970010bd\": container with ID starting with 70e6d561ff544f87db9961425bac811428ae0af38783a06eeeaaeffb970010bd not found: ID does not exist" containerID="70e6d561ff544f87db9961425bac811428ae0af38783a06eeeaaeffb970010bd" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.571916 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70e6d561ff544f87db9961425bac811428ae0af38783a06eeeaaeffb970010bd"} err="failed to get container status \"70e6d561ff544f87db9961425bac811428ae0af38783a06eeeaaeffb970010bd\": rpc error: code = NotFound desc = could not find container \"70e6d561ff544f87db9961425bac811428ae0af38783a06eeeaaeffb970010bd\": container with ID starting with 70e6d561ff544f87db9961425bac811428ae0af38783a06eeeaaeffb970010bd not found: ID does not exist" Feb 27 18:56:59 crc kubenswrapper[4708]: I0227 18:56:59.584628 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hw6bq"] Feb 27 18:57:00 crc kubenswrapper[4708]: I0227 18:57:00.246275 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" path="/var/lib/kubelet/pods/8d6f5bbc-61bb-44eb-9076-28cb6a8be18c/volumes" Feb 27 18:57:02 crc kubenswrapper[4708]: I0227 18:57:02.237657 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:57:02 crc kubenswrapper[4708]: E0227 18:57:02.238474 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:57:07 crc kubenswrapper[4708]: I0227 18:57:07.972019 4708 scope.go:117] "RemoveContainer" containerID="8e3ffb60034da3100fbd66005b1861a0b57380f96ad4df825fcf4afe793d4dd6" Feb 
27 18:57:15 crc kubenswrapper[4708]: I0227 18:57:15.229415 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:57:15 crc kubenswrapper[4708]: E0227 18:57:15.230273 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:57:28 crc kubenswrapper[4708]: I0227 18:57:28.228430 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:57:28 crc kubenswrapper[4708]: E0227 18:57:28.229278 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:57:42 crc kubenswrapper[4708]: I0227 18:57:42.266224 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:57:42 crc kubenswrapper[4708]: E0227 18:57:42.267640 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:57:56 crc kubenswrapper[4708]: I0227 18:57:56.229674 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:57:56 crc kubenswrapper[4708]: E0227 18:57:56.230693 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.146177 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536978-4cmdp"] Feb 27 18:58:00 crc kubenswrapper[4708]: E0227 18:58:00.147635 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" containerName="registry-server" Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.147652 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" containerName="registry-server" Feb 27 18:58:00 crc kubenswrapper[4708]: E0227 18:58:00.147684 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" containerName="extract-utilities" Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.147691 4708 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" containerName="extract-utilities" Feb 27 18:58:00 crc kubenswrapper[4708]: E0227 18:58:00.147713 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" containerName="extract-content" Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.147718 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" containerName="extract-content" Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.147910 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d6f5bbc-61bb-44eb-9076-28cb6a8be18c" containerName="registry-server" Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.148582 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536978-4cmdp" Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.153300 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.153664 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.153896 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.168082 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536978-4cmdp"] Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.258284 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sthw8\" (UniqueName: \"kubernetes.io/projected/eca96e98-47ad-4b0c-a231-f954ef746657-kube-api-access-sthw8\") pod \"auto-csr-approver-29536978-4cmdp\" (UID: \"eca96e98-47ad-4b0c-a231-f954ef746657\") " pod="openshift-infra/auto-csr-approver-29536978-4cmdp" Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.360841 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sthw8\" (UniqueName: \"kubernetes.io/projected/eca96e98-47ad-4b0c-a231-f954ef746657-kube-api-access-sthw8\") pod \"auto-csr-approver-29536978-4cmdp\" (UID: \"eca96e98-47ad-4b0c-a231-f954ef746657\") " pod="openshift-infra/auto-csr-approver-29536978-4cmdp" Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.383253 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sthw8\" (UniqueName: \"kubernetes.io/projected/eca96e98-47ad-4b0c-a231-f954ef746657-kube-api-access-sthw8\") pod \"auto-csr-approver-29536978-4cmdp\" (UID: \"eca96e98-47ad-4b0c-a231-f954ef746657\") " pod="openshift-infra/auto-csr-approver-29536978-4cmdp" Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.469550 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536978-4cmdp" Feb 27 18:58:00 crc kubenswrapper[4708]: W0227 18:58:00.968916 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeca96e98_47ad_4b0c_a231_f954ef746657.slice/crio-b22b12b71499d0348310c7ec4c731210ac9dc600441c4e231a2e5832287232b9 WatchSource:0}: Error finding container b22b12b71499d0348310c7ec4c731210ac9dc600441c4e231a2e5832287232b9: Status 404 returned error can't find the container with id b22b12b71499d0348310c7ec4c731210ac9dc600441c4e231a2e5832287232b9 Feb 27 18:58:00 crc kubenswrapper[4708]: I0227 18:58:00.973491 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536978-4cmdp"] Feb 27 18:58:01 crc kubenswrapper[4708]: I0227 18:58:01.220529 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536978-4cmdp" event={"ID":"eca96e98-47ad-4b0c-a231-f954ef746657","Type":"ContainerStarted","Data":"b22b12b71499d0348310c7ec4c731210ac9dc600441c4e231a2e5832287232b9"} Feb 27 18:58:03 crc kubenswrapper[4708]: I0227 18:58:03.246328 4708 generic.go:334] "Generic (PLEG): container finished" podID="eca96e98-47ad-4b0c-a231-f954ef746657" containerID="d5a7b90b3093ee0e33433ca62f74574d69bb51259791a179520e53d934ccb3e2" exitCode=0 Feb 27 18:58:03 crc kubenswrapper[4708]: I0227 18:58:03.246467 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536978-4cmdp" event={"ID":"eca96e98-47ad-4b0c-a231-f954ef746657","Type":"ContainerDied","Data":"d5a7b90b3093ee0e33433ca62f74574d69bb51259791a179520e53d934ccb3e2"} Feb 27 18:58:04 crc kubenswrapper[4708]: I0227 18:58:04.685998 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536978-4cmdp" Feb 27 18:58:04 crc kubenswrapper[4708]: I0227 18:58:04.758369 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sthw8\" (UniqueName: \"kubernetes.io/projected/eca96e98-47ad-4b0c-a231-f954ef746657-kube-api-access-sthw8\") pod \"eca96e98-47ad-4b0c-a231-f954ef746657\" (UID: \"eca96e98-47ad-4b0c-a231-f954ef746657\") " Feb 27 18:58:04 crc kubenswrapper[4708]: I0227 18:58:04.767573 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eca96e98-47ad-4b0c-a231-f954ef746657-kube-api-access-sthw8" (OuterVolumeSpecName: "kube-api-access-sthw8") pod "eca96e98-47ad-4b0c-a231-f954ef746657" (UID: "eca96e98-47ad-4b0c-a231-f954ef746657"). InnerVolumeSpecName "kube-api-access-sthw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 18:58:04 crc kubenswrapper[4708]: I0227 18:58:04.861090 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sthw8\" (UniqueName: \"kubernetes.io/projected/eca96e98-47ad-4b0c-a231-f954ef746657-kube-api-access-sthw8\") on node \"crc\" DevicePath \"\"" Feb 27 18:58:05 crc kubenswrapper[4708]: I0227 18:58:05.271884 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536978-4cmdp" event={"ID":"eca96e98-47ad-4b0c-a231-f954ef746657","Type":"ContainerDied","Data":"b22b12b71499d0348310c7ec4c731210ac9dc600441c4e231a2e5832287232b9"} Feb 27 18:58:05 crc kubenswrapper[4708]: I0227 18:58:05.272147 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b22b12b71499d0348310c7ec4c731210ac9dc600441c4e231a2e5832287232b9" Feb 27 18:58:05 crc kubenswrapper[4708]: I0227 18:58:05.271932 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536978-4cmdp" Feb 27 18:58:05 crc kubenswrapper[4708]: I0227 18:58:05.764054 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536972-v4hn4"] Feb 27 18:58:05 crc kubenswrapper[4708]: I0227 18:58:05.774301 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536972-v4hn4"] Feb 27 18:58:06 crc kubenswrapper[4708]: I0227 18:58:06.244827 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff" path="/var/lib/kubelet/pods/d4ab08d3-b48e-4b33-9c6c-9ea57b22cdff/volumes" Feb 27 18:58:08 crc kubenswrapper[4708]: I0227 18:58:08.097913 4708 scope.go:117] "RemoveContainer" containerID="06e94ba88ff441236b678cbf333e7d60bb20eded19cbbd00d0c6a0aa45fa9131" Feb 27 18:58:09 crc kubenswrapper[4708]: I0227 18:58:09.229091 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:58:09 crc kubenswrapper[4708]: E0227 18:58:09.229433 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:58:21 crc kubenswrapper[4708]: I0227 18:58:21.228572 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:58:21 crc kubenswrapper[4708]: E0227 18:58:21.230568 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:58:36 crc kubenswrapper[4708]: I0227 18:58:36.228963 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:58:36 crc kubenswrapper[4708]: E0227 18:58:36.230419 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:58:51 crc kubenswrapper[4708]: I0227 18:58:51.228654 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:58:51 crc kubenswrapper[4708]: E0227 18:58:51.229444 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:59:04 crc kubenswrapper[4708]: I0227 18:59:04.229419 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:59:04 crc kubenswrapper[4708]: E0227 18:59:04.230445 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:59:18 crc kubenswrapper[4708]: I0227 18:59:18.229002 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:59:18 crc kubenswrapper[4708]: E0227 18:59:18.230035 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:59:30 crc kubenswrapper[4708]: I0227 18:59:30.228595 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:59:30 crc kubenswrapper[4708]: E0227 18:59:30.229715 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 18:59:43 crc kubenswrapper[4708]: I0227 18:59:43.228310 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 18:59:44 crc kubenswrapper[4708]: I0227 18:59:44.411339 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"995c06fab749458d7934e418b18338358959b15d6a8dcdc365c46a871a147a79"} Feb 27 
19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.159072 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536980-pxjgn"] Feb 27 19:00:00 crc kubenswrapper[4708]: E0227 19:00:00.160185 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eca96e98-47ad-4b0c-a231-f954ef746657" containerName="oc" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.160202 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="eca96e98-47ad-4b0c-a231-f954ef746657" containerName="oc" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.160472 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="eca96e98-47ad-4b0c-a231-f954ef746657" containerName="oc" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.161409 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536980-pxjgn" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.163671 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.164175 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.165595 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.172120 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx"] Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.174098 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.182412 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.182905 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.183234 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536980-pxjgn"] Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.196271 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx"] Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.277388 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91e758b6-5dbd-44f5-a13b-c1894d7a129c-config-volume\") pod \"collect-profiles-29536980-xp4fx\" (UID: \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.277733 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wln5q\" (UniqueName: \"kubernetes.io/projected/f2832e08-72ae-4762-bd42-682a07d95f26-kube-api-access-wln5q\") pod \"auto-csr-approver-29536980-pxjgn\" (UID: \"f2832e08-72ae-4762-bd42-682a07d95f26\") " pod="openshift-infra/auto-csr-approver-29536980-pxjgn" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 
19:00:00.278069 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prjls\" (UniqueName: \"kubernetes.io/projected/91e758b6-5dbd-44f5-a13b-c1894d7a129c-kube-api-access-prjls\") pod \"collect-profiles-29536980-xp4fx\" (UID: \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.278213 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91e758b6-5dbd-44f5-a13b-c1894d7a129c-secret-volume\") pod \"collect-profiles-29536980-xp4fx\" (UID: \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.380785 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91e758b6-5dbd-44f5-a13b-c1894d7a129c-config-volume\") pod \"collect-profiles-29536980-xp4fx\" (UID: \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.380966 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wln5q\" (UniqueName: \"kubernetes.io/projected/f2832e08-72ae-4762-bd42-682a07d95f26-kube-api-access-wln5q\") pod \"auto-csr-approver-29536980-pxjgn\" (UID: \"f2832e08-72ae-4762-bd42-682a07d95f26\") " pod="openshift-infra/auto-csr-approver-29536980-pxjgn" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.381283 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prjls\" (UniqueName: \"kubernetes.io/projected/91e758b6-5dbd-44f5-a13b-c1894d7a129c-kube-api-access-prjls\") pod \"collect-profiles-29536980-xp4fx\" (UID: \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.381346 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91e758b6-5dbd-44f5-a13b-c1894d7a129c-secret-volume\") pod \"collect-profiles-29536980-xp4fx\" (UID: \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.382663 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91e758b6-5dbd-44f5-a13b-c1894d7a129c-config-volume\") pod \"collect-profiles-29536980-xp4fx\" (UID: \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.390602 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91e758b6-5dbd-44f5-a13b-c1894d7a129c-secret-volume\") pod \"collect-profiles-29536980-xp4fx\" (UID: \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.401552 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prjls\" (UniqueName: 
\"kubernetes.io/projected/91e758b6-5dbd-44f5-a13b-c1894d7a129c-kube-api-access-prjls\") pod \"collect-profiles-29536980-xp4fx\" (UID: \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.403375 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wln5q\" (UniqueName: \"kubernetes.io/projected/f2832e08-72ae-4762-bd42-682a07d95f26-kube-api-access-wln5q\") pod \"auto-csr-approver-29536980-pxjgn\" (UID: \"f2832e08-72ae-4762-bd42-682a07d95f26\") " pod="openshift-infra/auto-csr-approver-29536980-pxjgn" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.496133 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536980-pxjgn" Feb 27 19:00:00 crc kubenswrapper[4708]: I0227 19:00:00.508793 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" Feb 27 19:00:01 crc kubenswrapper[4708]: I0227 19:00:01.009936 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536980-pxjgn"] Feb 27 19:00:01 crc kubenswrapper[4708]: I0227 19:00:01.097930 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx"] Feb 27 19:00:01 crc kubenswrapper[4708]: W0227 19:00:01.099130 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91e758b6_5dbd_44f5_a13b_c1894d7a129c.slice/crio-94c2a8c7804d02cd5b88525083443db911f92af2b1105e8e5dc7a835e23a9d54 WatchSource:0}: Error finding container 94c2a8c7804d02cd5b88525083443db911f92af2b1105e8e5dc7a835e23a9d54: Status 404 returned error can't find the container with id 94c2a8c7804d02cd5b88525083443db911f92af2b1105e8e5dc7a835e23a9d54 Feb 27 19:00:01 crc kubenswrapper[4708]: I0227 19:00:01.587307 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536980-pxjgn" event={"ID":"f2832e08-72ae-4762-bd42-682a07d95f26","Type":"ContainerStarted","Data":"0c06f979333ebc37506840680760348ea5ec38e54db885df65c203cb0639f04e"} Feb 27 19:00:01 crc kubenswrapper[4708]: I0227 19:00:01.590441 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" event={"ID":"91e758b6-5dbd-44f5-a13b-c1894d7a129c","Type":"ContainerStarted","Data":"02c4aed4805fddc9aa7382813e82c1d986db4261a744a410abe0b9d5636f96ad"} Feb 27 19:00:01 crc kubenswrapper[4708]: I0227 19:00:01.590505 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" event={"ID":"91e758b6-5dbd-44f5-a13b-c1894d7a129c","Type":"ContainerStarted","Data":"94c2a8c7804d02cd5b88525083443db911f92af2b1105e8e5dc7a835e23a9d54"} Feb 27 19:00:01 crc kubenswrapper[4708]: I0227 19:00:01.606309 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" podStartSLOduration=1.6062868909999999 podStartE2EDuration="1.606286891s" podCreationTimestamp="2026-02-27 19:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:00:01.604595943 +0000 UTC m=+7600.120393540" watchObservedRunningTime="2026-02-27 
19:00:01.606286891 +0000 UTC m=+7600.122084468" Feb 27 19:00:02 crc kubenswrapper[4708]: I0227 19:00:02.599453 4708 generic.go:334] "Generic (PLEG): container finished" podID="91e758b6-5dbd-44f5-a13b-c1894d7a129c" containerID="02c4aed4805fddc9aa7382813e82c1d986db4261a744a410abe0b9d5636f96ad" exitCode=0 Feb 27 19:00:02 crc kubenswrapper[4708]: I0227 19:00:02.599505 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" event={"ID":"91e758b6-5dbd-44f5-a13b-c1894d7a129c","Type":"ContainerDied","Data":"02c4aed4805fddc9aa7382813e82c1d986db4261a744a410abe0b9d5636f96ad"} Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.100117 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.273088 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prjls\" (UniqueName: \"kubernetes.io/projected/91e758b6-5dbd-44f5-a13b-c1894d7a129c-kube-api-access-prjls\") pod \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\" (UID: \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\") " Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.273441 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91e758b6-5dbd-44f5-a13b-c1894d7a129c-config-volume\") pod \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\" (UID: \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\") " Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.273478 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91e758b6-5dbd-44f5-a13b-c1894d7a129c-secret-volume\") pod \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\" (UID: \"91e758b6-5dbd-44f5-a13b-c1894d7a129c\") " Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.273951 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91e758b6-5dbd-44f5-a13b-c1894d7a129c-config-volume" (OuterVolumeSpecName: "config-volume") pod "91e758b6-5dbd-44f5-a13b-c1894d7a129c" (UID: "91e758b6-5dbd-44f5-a13b-c1894d7a129c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.274385 4708 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/91e758b6-5dbd-44f5-a13b-c1894d7a129c-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.279953 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91e758b6-5dbd-44f5-a13b-c1894d7a129c-kube-api-access-prjls" (OuterVolumeSpecName: "kube-api-access-prjls") pod "91e758b6-5dbd-44f5-a13b-c1894d7a129c" (UID: "91e758b6-5dbd-44f5-a13b-c1894d7a129c"). InnerVolumeSpecName "kube-api-access-prjls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.280902 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/91e758b6-5dbd-44f5-a13b-c1894d7a129c-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "91e758b6-5dbd-44f5-a13b-c1894d7a129c" (UID: "91e758b6-5dbd-44f5-a13b-c1894d7a129c"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.376056 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prjls\" (UniqueName: \"kubernetes.io/projected/91e758b6-5dbd-44f5-a13b-c1894d7a129c-kube-api-access-prjls\") on node \"crc\" DevicePath \"\"" Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.376273 4708 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/91e758b6-5dbd-44f5-a13b-c1894d7a129c-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.639155 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" event={"ID":"91e758b6-5dbd-44f5-a13b-c1894d7a129c","Type":"ContainerDied","Data":"94c2a8c7804d02cd5b88525083443db911f92af2b1105e8e5dc7a835e23a9d54"} Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.639203 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94c2a8c7804d02cd5b88525083443db911f92af2b1105e8e5dc7a835e23a9d54" Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.639218 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536980-xp4fx" Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.720432 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8"] Feb 27 19:00:04 crc kubenswrapper[4708]: I0227 19:00:04.734353 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536935-88fh8"] Feb 27 19:00:06 crc kubenswrapper[4708]: I0227 19:00:06.214670 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cloudkitty-lokistack-compactor-grpc" Feb 27 19:00:06 crc kubenswrapper[4708]: I0227 19:00:06.238851 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f784ada7-bb58-4319-afd9-fb504136a164" path="/var/lib/kubelet/pods/f784ada7-bb58-4319-afd9-fb504136a164/volumes" Feb 27 19:00:08 crc kubenswrapper[4708]: I0227 19:00:08.222798 4708 scope.go:117] "RemoveContainer" containerID="dba1145866de3fa9a54ace61a6affcc25c3c17af261a2377a8e7b7ace5e3ec2c" Feb 27 19:00:14 crc kubenswrapper[4708]: I0227 19:00:14.741388 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536980-pxjgn" event={"ID":"f2832e08-72ae-4762-bd42-682a07d95f26","Type":"ContainerStarted","Data":"904688ceeb11365d0f852b98f96c827f0bf7d5915574d913bde3f0f15337fa5a"} Feb 27 19:00:14 crc kubenswrapper[4708]: I0227 19:00:14.768441 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536980-pxjgn" podStartSLOduration=1.546216454 podStartE2EDuration="14.768422475s" podCreationTimestamp="2026-02-27 19:00:00 +0000 UTC" firstStartedPulling="2026-02-27 19:00:01.018072178 +0000 UTC m=+7599.533869785" lastFinishedPulling="2026-02-27 19:00:14.240278219 +0000 UTC m=+7612.756075806" observedRunningTime="2026-02-27 19:00:14.759483562 +0000 UTC m=+7613.275281159" watchObservedRunningTime="2026-02-27 19:00:14.768422475 +0000 UTC m=+7613.284220072" Feb 27 19:00:15 crc kubenswrapper[4708]: I0227 19:00:15.755291 4708 generic.go:334] "Generic (PLEG): container finished" podID="f2832e08-72ae-4762-bd42-682a07d95f26" 
containerID="904688ceeb11365d0f852b98f96c827f0bf7d5915574d913bde3f0f15337fa5a" exitCode=0 Feb 27 19:00:15 crc kubenswrapper[4708]: I0227 19:00:15.755368 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536980-pxjgn" event={"ID":"f2832e08-72ae-4762-bd42-682a07d95f26","Type":"ContainerDied","Data":"904688ceeb11365d0f852b98f96c827f0bf7d5915574d913bde3f0f15337fa5a"} Feb 27 19:00:17 crc kubenswrapper[4708]: I0227 19:00:17.305205 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536980-pxjgn" Feb 27 19:00:17 crc kubenswrapper[4708]: I0227 19:00:17.352750 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wln5q\" (UniqueName: \"kubernetes.io/projected/f2832e08-72ae-4762-bd42-682a07d95f26-kube-api-access-wln5q\") pod \"f2832e08-72ae-4762-bd42-682a07d95f26\" (UID: \"f2832e08-72ae-4762-bd42-682a07d95f26\") " Feb 27 19:00:17 crc kubenswrapper[4708]: I0227 19:00:17.358818 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2832e08-72ae-4762-bd42-682a07d95f26-kube-api-access-wln5q" (OuterVolumeSpecName: "kube-api-access-wln5q") pod "f2832e08-72ae-4762-bd42-682a07d95f26" (UID: "f2832e08-72ae-4762-bd42-682a07d95f26"). InnerVolumeSpecName "kube-api-access-wln5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:00:17 crc kubenswrapper[4708]: I0227 19:00:17.455021 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wln5q\" (UniqueName: \"kubernetes.io/projected/f2832e08-72ae-4762-bd42-682a07d95f26-kube-api-access-wln5q\") on node \"crc\" DevicePath \"\"" Feb 27 19:00:17 crc kubenswrapper[4708]: I0227 19:00:17.775465 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536980-pxjgn" event={"ID":"f2832e08-72ae-4762-bd42-682a07d95f26","Type":"ContainerDied","Data":"0c06f979333ebc37506840680760348ea5ec38e54db885df65c203cb0639f04e"} Feb 27 19:00:17 crc kubenswrapper[4708]: I0227 19:00:17.775500 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c06f979333ebc37506840680760348ea5ec38e54db885df65c203cb0639f04e" Feb 27 19:00:17 crc kubenswrapper[4708]: I0227 19:00:17.775741 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536980-pxjgn" Feb 27 19:00:17 crc kubenswrapper[4708]: I0227 19:00:17.822943 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536974-4vqhn"] Feb 27 19:00:17 crc kubenswrapper[4708]: I0227 19:00:17.831498 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536974-4vqhn"] Feb 27 19:00:18 crc kubenswrapper[4708]: I0227 19:00:18.244702 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13fe94df-976c-47e0-a7a4-697c38d4eac9" path="/var/lib/kubelet/pods/13fe94df-976c-47e0-a7a4-697c38d4eac9/volumes" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.159862 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29536981-c7wzx"] Feb 27 19:01:00 crc kubenswrapper[4708]: E0227 19:01:00.160686 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2832e08-72ae-4762-bd42-682a07d95f26" containerName="oc" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.160699 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2832e08-72ae-4762-bd42-682a07d95f26" containerName="oc" Feb 27 19:01:00 crc kubenswrapper[4708]: E0227 19:01:00.160716 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91e758b6-5dbd-44f5-a13b-c1894d7a129c" containerName="collect-profiles" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.160723 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="91e758b6-5dbd-44f5-a13b-c1894d7a129c" containerName="collect-profiles" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.160909 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2832e08-72ae-4762-bd42-682a07d95f26" containerName="oc" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.160934 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="91e758b6-5dbd-44f5-a13b-c1894d7a129c" containerName="collect-profiles" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.161602 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.182154 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29536981-c7wzx"] Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.198481 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-config-data\") pod \"keystone-cron-29536981-c7wzx\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.198586 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-fernet-keys\") pod \"keystone-cron-29536981-c7wzx\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.198764 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-combined-ca-bundle\") pod \"keystone-cron-29536981-c7wzx\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.199176 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cs56k\" (UniqueName: \"kubernetes.io/projected/478e58b4-a3ac-4474-88b8-5d289430de52-kube-api-access-cs56k\") pod \"keystone-cron-29536981-c7wzx\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.301310 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cs56k\" (UniqueName: \"kubernetes.io/projected/478e58b4-a3ac-4474-88b8-5d289430de52-kube-api-access-cs56k\") pod \"keystone-cron-29536981-c7wzx\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.301953 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-config-data\") pod \"keystone-cron-29536981-c7wzx\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.302066 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-fernet-keys\") pod \"keystone-cron-29536981-c7wzx\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.302720 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-combined-ca-bundle\") pod \"keystone-cron-29536981-c7wzx\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.308076 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-config-data\") pod \"keystone-cron-29536981-c7wzx\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.308087 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-fernet-keys\") pod \"keystone-cron-29536981-c7wzx\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.309527 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-combined-ca-bundle\") pod \"keystone-cron-29536981-c7wzx\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.322921 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cs56k\" (UniqueName: \"kubernetes.io/projected/478e58b4-a3ac-4474-88b8-5d289430de52-kube-api-access-cs56k\") pod \"keystone-cron-29536981-c7wzx\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:00 crc kubenswrapper[4708]: I0227 19:01:00.490598 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:01 crc kubenswrapper[4708]: I0227 19:01:01.071013 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29536981-c7wzx"] Feb 27 19:01:01 crc kubenswrapper[4708]: I0227 19:01:01.281073 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536981-c7wzx" event={"ID":"478e58b4-a3ac-4474-88b8-5d289430de52","Type":"ContainerStarted","Data":"a13ae87054aeb0e47b3d802db5ac5031122e86aa40e9c273ff3d0f3229bb91e0"} Feb 27 19:01:02 crc kubenswrapper[4708]: I0227 19:01:02.296440 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536981-c7wzx" event={"ID":"478e58b4-a3ac-4474-88b8-5d289430de52","Type":"ContainerStarted","Data":"ba93d56de7bad16f8ab3c067eb5eb57e697c68cbfa8e06cb1f16ff990a9569a2"} Feb 27 19:01:02 crc kubenswrapper[4708]: I0227 19:01:02.327073 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29536981-c7wzx" podStartSLOduration=2.327052335 podStartE2EDuration="2.327052335s" podCreationTimestamp="2026-02-27 19:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:01:02.315977703 +0000 UTC m=+7660.831775320" watchObservedRunningTime="2026-02-27 19:01:02.327052335 +0000 UTC m=+7660.842849922" Feb 27 19:01:05 crc kubenswrapper[4708]: I0227 19:01:05.332253 4708 generic.go:334] "Generic (PLEG): container finished" podID="478e58b4-a3ac-4474-88b8-5d289430de52" containerID="ba93d56de7bad16f8ab3c067eb5eb57e697c68cbfa8e06cb1f16ff990a9569a2" exitCode=0 Feb 27 19:01:05 crc kubenswrapper[4708]: I0227 19:01:05.332814 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536981-c7wzx" event={"ID":"478e58b4-a3ac-4474-88b8-5d289430de52","Type":"ContainerDied","Data":"ba93d56de7bad16f8ab3c067eb5eb57e697c68cbfa8e06cb1f16ff990a9569a2"} Feb 27 19:01:06 crc kubenswrapper[4708]: 
I0227 19:01:06.772090 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:06 crc kubenswrapper[4708]: I0227 19:01:06.938460 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-config-data\") pod \"478e58b4-a3ac-4474-88b8-5d289430de52\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " Feb 27 19:01:06 crc kubenswrapper[4708]: I0227 19:01:06.938664 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-combined-ca-bundle\") pod \"478e58b4-a3ac-4474-88b8-5d289430de52\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " Feb 27 19:01:06 crc kubenswrapper[4708]: I0227 19:01:06.938770 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cs56k\" (UniqueName: \"kubernetes.io/projected/478e58b4-a3ac-4474-88b8-5d289430de52-kube-api-access-cs56k\") pod \"478e58b4-a3ac-4474-88b8-5d289430de52\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " Feb 27 19:01:06 crc kubenswrapper[4708]: I0227 19:01:06.938820 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-fernet-keys\") pod \"478e58b4-a3ac-4474-88b8-5d289430de52\" (UID: \"478e58b4-a3ac-4474-88b8-5d289430de52\") " Feb 27 19:01:06 crc kubenswrapper[4708]: I0227 19:01:06.950513 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/478e58b4-a3ac-4474-88b8-5d289430de52-kube-api-access-cs56k" (OuterVolumeSpecName: "kube-api-access-cs56k") pod "478e58b4-a3ac-4474-88b8-5d289430de52" (UID: "478e58b4-a3ac-4474-88b8-5d289430de52"). InnerVolumeSpecName "kube-api-access-cs56k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:01:06 crc kubenswrapper[4708]: I0227 19:01:06.959294 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "478e58b4-a3ac-4474-88b8-5d289430de52" (UID: "478e58b4-a3ac-4474-88b8-5d289430de52"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:01:06 crc kubenswrapper[4708]: I0227 19:01:06.974765 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "478e58b4-a3ac-4474-88b8-5d289430de52" (UID: "478e58b4-a3ac-4474-88b8-5d289430de52"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:01:07 crc kubenswrapper[4708]: I0227 19:01:07.038182 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-config-data" (OuterVolumeSpecName: "config-data") pod "478e58b4-a3ac-4474-88b8-5d289430de52" (UID: "478e58b4-a3ac-4474-88b8-5d289430de52"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:01:07 crc kubenswrapper[4708]: I0227 19:01:07.041722 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:01:07 crc kubenswrapper[4708]: I0227 19:01:07.041751 4708 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Feb 27 19:01:07 crc kubenswrapper[4708]: I0227 19:01:07.041763 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cs56k\" (UniqueName: \"kubernetes.io/projected/478e58b4-a3ac-4474-88b8-5d289430de52-kube-api-access-cs56k\") on node \"crc\" DevicePath \"\"" Feb 27 19:01:07 crc kubenswrapper[4708]: I0227 19:01:07.041774 4708 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/478e58b4-a3ac-4474-88b8-5d289430de52-fernet-keys\") on node \"crc\" DevicePath \"\"" Feb 27 19:01:07 crc kubenswrapper[4708]: I0227 19:01:07.353096 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29536981-c7wzx" event={"ID":"478e58b4-a3ac-4474-88b8-5d289430de52","Type":"ContainerDied","Data":"a13ae87054aeb0e47b3d802db5ac5031122e86aa40e9c273ff3d0f3229bb91e0"} Feb 27 19:01:07 crc kubenswrapper[4708]: I0227 19:01:07.353132 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a13ae87054aeb0e47b3d802db5ac5031122e86aa40e9c273ff3d0f3229bb91e0" Feb 27 19:01:07 crc kubenswrapper[4708]: I0227 19:01:07.353166 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29536981-c7wzx" Feb 27 19:01:08 crc kubenswrapper[4708]: I0227 19:01:08.297903 4708 scope.go:117] "RemoveContainer" containerID="8f982f0eb50667723a8f9d5355bf8b2b63e19f85df18ccbd4d63a8e4b7d9a2ac" Feb 27 19:01:37 crc kubenswrapper[4708]: I0227 19:01:37.842925 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p25ml"] Feb 27 19:01:37 crc kubenswrapper[4708]: E0227 19:01:37.845735 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="478e58b4-a3ac-4474-88b8-5d289430de52" containerName="keystone-cron" Feb 27 19:01:37 crc kubenswrapper[4708]: I0227 19:01:37.845866 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="478e58b4-a3ac-4474-88b8-5d289430de52" containerName="keystone-cron" Feb 27 19:01:37 crc kubenswrapper[4708]: I0227 19:01:37.846218 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="478e58b4-a3ac-4474-88b8-5d289430de52" containerName="keystone-cron" Feb 27 19:01:37 crc kubenswrapper[4708]: I0227 19:01:37.848403 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:37 crc kubenswrapper[4708]: I0227 19:01:37.884623 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p25ml"] Feb 27 19:01:37 crc kubenswrapper[4708]: I0227 19:01:37.960766 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-catalog-content\") pod \"certified-operators-p25ml\" (UID: \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\") " pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:37 crc kubenswrapper[4708]: I0227 19:01:37.960923 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-utilities\") pod \"certified-operators-p25ml\" (UID: \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\") " pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:37 crc kubenswrapper[4708]: I0227 19:01:37.960954 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njglg\" (UniqueName: \"kubernetes.io/projected/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-kube-api-access-njglg\") pod \"certified-operators-p25ml\" (UID: \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\") " pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:38 crc kubenswrapper[4708]: I0227 19:01:38.063582 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-utilities\") pod \"certified-operators-p25ml\" (UID: \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\") " pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:38 crc kubenswrapper[4708]: I0227 19:01:38.063655 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-njglg\" (UniqueName: \"kubernetes.io/projected/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-kube-api-access-njglg\") pod \"certified-operators-p25ml\" (UID: \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\") " pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:38 crc kubenswrapper[4708]: I0227 19:01:38.063956 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-catalog-content\") pod \"certified-operators-p25ml\" (UID: \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\") " pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:38 crc kubenswrapper[4708]: I0227 19:01:38.064117 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-utilities\") pod \"certified-operators-p25ml\" (UID: \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\") " pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:38 crc kubenswrapper[4708]: I0227 19:01:38.064374 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-catalog-content\") pod \"certified-operators-p25ml\" (UID: \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\") " pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:38 crc kubenswrapper[4708]: I0227 19:01:38.088632 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-njglg\" (UniqueName: \"kubernetes.io/projected/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-kube-api-access-njglg\") pod \"certified-operators-p25ml\" (UID: \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\") " pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:38 crc kubenswrapper[4708]: I0227 19:01:38.183584 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:38 crc kubenswrapper[4708]: I0227 19:01:38.708205 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p25ml"] Feb 27 19:01:38 crc kubenswrapper[4708]: I0227 19:01:38.743608 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p25ml" event={"ID":"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334","Type":"ContainerStarted","Data":"8a4ae88aac9a91122b62446ec5f28893887631e7918bf29b2a7f1febcb052232"} Feb 27 19:01:39 crc kubenswrapper[4708]: I0227 19:01:39.756432 4708 generic.go:334] "Generic (PLEG): container finished" podID="4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" containerID="62dde4840a800cb4495f22f18ee6e0ded26bf01e2713ea218e00d98fa6be5e85" exitCode=0 Feb 27 19:01:39 crc kubenswrapper[4708]: I0227 19:01:39.756491 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p25ml" event={"ID":"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334","Type":"ContainerDied","Data":"62dde4840a800cb4495f22f18ee6e0ded26bf01e2713ea218e00d98fa6be5e85"} Feb 27 19:01:39 crc kubenswrapper[4708]: I0227 19:01:39.759601 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.033516 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4g87q"] Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.036234 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.061340 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4g87q"] Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.110632 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mcps\" (UniqueName: \"kubernetes.io/projected/61e761be-74e5-4dbb-b49b-15f597efc0e2-kube-api-access-9mcps\") pod \"community-operators-4g87q\" (UID: \"61e761be-74e5-4dbb-b49b-15f597efc0e2\") " pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.110684 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e761be-74e5-4dbb-b49b-15f597efc0e2-utilities\") pod \"community-operators-4g87q\" (UID: \"61e761be-74e5-4dbb-b49b-15f597efc0e2\") " pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.110741 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e761be-74e5-4dbb-b49b-15f597efc0e2-catalog-content\") pod \"community-operators-4g87q\" (UID: \"61e761be-74e5-4dbb-b49b-15f597efc0e2\") " pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.213999 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e761be-74e5-4dbb-b49b-15f597efc0e2-catalog-content\") pod \"community-operators-4g87q\" (UID: \"61e761be-74e5-4dbb-b49b-15f597efc0e2\") " pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.214150 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mcps\" (UniqueName: \"kubernetes.io/projected/61e761be-74e5-4dbb-b49b-15f597efc0e2-kube-api-access-9mcps\") pod \"community-operators-4g87q\" (UID: \"61e761be-74e5-4dbb-b49b-15f597efc0e2\") " pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.214187 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e761be-74e5-4dbb-b49b-15f597efc0e2-utilities\") pod \"community-operators-4g87q\" (UID: \"61e761be-74e5-4dbb-b49b-15f597efc0e2\") " pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.214637 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e761be-74e5-4dbb-b49b-15f597efc0e2-utilities\") pod \"community-operators-4g87q\" (UID: \"61e761be-74e5-4dbb-b49b-15f597efc0e2\") " pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.214709 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e761be-74e5-4dbb-b49b-15f597efc0e2-catalog-content\") pod \"community-operators-4g87q\" (UID: \"61e761be-74e5-4dbb-b49b-15f597efc0e2\") " pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.240747 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9mcps\" (UniqueName: \"kubernetes.io/projected/61e761be-74e5-4dbb-b49b-15f597efc0e2-kube-api-access-9mcps\") pod \"community-operators-4g87q\" (UID: \"61e761be-74e5-4dbb-b49b-15f597efc0e2\") " pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.356249 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.766638 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p25ml" event={"ID":"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334","Type":"ContainerStarted","Data":"1ed8aa8cdcd430f7906530261e8d240bd79bb837f2266982f037e32492132af1"} Feb 27 19:01:40 crc kubenswrapper[4708]: I0227 19:01:40.893290 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4g87q"] Feb 27 19:01:41 crc kubenswrapper[4708]: I0227 19:01:41.777102 4708 generic.go:334] "Generic (PLEG): container finished" podID="61e761be-74e5-4dbb-b49b-15f597efc0e2" containerID="fdbd510ab68ed7069ee9ef2c9f840d07e0089315537ebae05b6294f6d7c5fe38" exitCode=0 Feb 27 19:01:41 crc kubenswrapper[4708]: I0227 19:01:41.777303 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g87q" event={"ID":"61e761be-74e5-4dbb-b49b-15f597efc0e2","Type":"ContainerDied","Data":"fdbd510ab68ed7069ee9ef2c9f840d07e0089315537ebae05b6294f6d7c5fe38"} Feb 27 19:01:41 crc kubenswrapper[4708]: I0227 19:01:41.778166 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g87q" event={"ID":"61e761be-74e5-4dbb-b49b-15f597efc0e2","Type":"ContainerStarted","Data":"16c161676e77eafcec6a20b35620f3ba40f8e7abd26f61e476cf751ec954284a"} Feb 27 19:01:41 crc kubenswrapper[4708]: I0227 19:01:41.786906 4708 generic.go:334] "Generic (PLEG): container finished" podID="4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" containerID="1ed8aa8cdcd430f7906530261e8d240bd79bb837f2266982f037e32492132af1" exitCode=0 Feb 27 19:01:41 crc kubenswrapper[4708]: I0227 19:01:41.786975 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p25ml" event={"ID":"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334","Type":"ContainerDied","Data":"1ed8aa8cdcd430f7906530261e8d240bd79bb837f2266982f037e32492132af1"} Feb 27 19:01:43 crc kubenswrapper[4708]: I0227 19:01:43.818170 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g87q" event={"ID":"61e761be-74e5-4dbb-b49b-15f597efc0e2","Type":"ContainerStarted","Data":"f79297d1b0c8822738fbcfb33363a6d45230081ccb2ab29e8644166086e81498"} Feb 27 19:01:43 crc kubenswrapper[4708]: I0227 19:01:43.820723 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p25ml" event={"ID":"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334","Type":"ContainerStarted","Data":"374ea53dd75b3654f57af5395c12bb45bf24ea4f8657b22fb747a7d9a2e9fefc"} Feb 27 19:01:43 crc kubenswrapper[4708]: I0227 19:01:43.880262 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p25ml" podStartSLOduration=3.9394072060000003 podStartE2EDuration="6.880232321s" podCreationTimestamp="2026-02-27 19:01:37 +0000 UTC" firstStartedPulling="2026-02-27 19:01:39.759296594 +0000 UTC m=+7698.275094171" lastFinishedPulling="2026-02-27 
19:01:42.700121659 +0000 UTC m=+7701.215919286" observedRunningTime="2026-02-27 19:01:43.862601744 +0000 UTC m=+7702.378399371" watchObservedRunningTime="2026-02-27 19:01:43.880232321 +0000 UTC m=+7702.396029948" Feb 27 19:01:45 crc kubenswrapper[4708]: I0227 19:01:45.844415 4708 generic.go:334] "Generic (PLEG): container finished" podID="61e761be-74e5-4dbb-b49b-15f597efc0e2" containerID="f79297d1b0c8822738fbcfb33363a6d45230081ccb2ab29e8644166086e81498" exitCode=0 Feb 27 19:01:45 crc kubenswrapper[4708]: I0227 19:01:45.844510 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g87q" event={"ID":"61e761be-74e5-4dbb-b49b-15f597efc0e2","Type":"ContainerDied","Data":"f79297d1b0c8822738fbcfb33363a6d45230081ccb2ab29e8644166086e81498"} Feb 27 19:01:46 crc kubenswrapper[4708]: I0227 19:01:46.857594 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g87q" event={"ID":"61e761be-74e5-4dbb-b49b-15f597efc0e2","Type":"ContainerStarted","Data":"67963f162959226b0f983343f90ae4999a869542a33a8bd79c462774c0b1051c"} Feb 27 19:01:46 crc kubenswrapper[4708]: I0227 19:01:46.886170 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4g87q" podStartSLOduration=2.376684579 podStartE2EDuration="6.886151641s" podCreationTimestamp="2026-02-27 19:01:40 +0000 UTC" firstStartedPulling="2026-02-27 19:01:41.783947757 +0000 UTC m=+7700.299745344" lastFinishedPulling="2026-02-27 19:01:46.293414789 +0000 UTC m=+7704.809212406" observedRunningTime="2026-02-27 19:01:46.881488809 +0000 UTC m=+7705.397286406" watchObservedRunningTime="2026-02-27 19:01:46.886151641 +0000 UTC m=+7705.401949238" Feb 27 19:01:48 crc kubenswrapper[4708]: I0227 19:01:48.184063 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:48 crc kubenswrapper[4708]: I0227 19:01:48.184246 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:48 crc kubenswrapper[4708]: I0227 19:01:48.250995 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:48 crc kubenswrapper[4708]: I0227 19:01:48.937784 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:50 crc kubenswrapper[4708]: I0227 19:01:50.219974 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p25ml"] Feb 27 19:01:50 crc kubenswrapper[4708]: I0227 19:01:50.357077 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:01:50 crc kubenswrapper[4708]: I0227 19:01:50.357153 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:01:50 crc kubenswrapper[4708]: I0227 19:01:50.434745 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:01:51 crc kubenswrapper[4708]: I0227 19:01:51.907366 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p25ml" podUID="4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" containerName="registry-server" 
containerID="cri-o://374ea53dd75b3654f57af5395c12bb45bf24ea4f8657b22fb747a7d9a2e9fefc" gracePeriod=2 Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.482930 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.587524 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-utilities\") pod \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\" (UID: \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\") " Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.587594 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-catalog-content\") pod \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\" (UID: \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\") " Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.587634 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njglg\" (UniqueName: \"kubernetes.io/projected/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-kube-api-access-njglg\") pod \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\" (UID: \"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334\") " Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.591597 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-utilities" (OuterVolumeSpecName: "utilities") pod "4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" (UID: "4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.593574 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-kube-api-access-njglg" (OuterVolumeSpecName: "kube-api-access-njglg") pod "4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" (UID: "4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334"). InnerVolumeSpecName "kube-api-access-njglg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.646772 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" (UID: "4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.690296 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.690586 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.690674 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-njglg\" (UniqueName: \"kubernetes.io/projected/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334-kube-api-access-njglg\") on node \"crc\" DevicePath \"\"" Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.924373 4708 generic.go:334] "Generic (PLEG): container finished" podID="4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" containerID="374ea53dd75b3654f57af5395c12bb45bf24ea4f8657b22fb747a7d9a2e9fefc" exitCode=0 Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.924433 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p25ml" event={"ID":"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334","Type":"ContainerDied","Data":"374ea53dd75b3654f57af5395c12bb45bf24ea4f8657b22fb747a7d9a2e9fefc"} Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.924468 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p25ml" event={"ID":"4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334","Type":"ContainerDied","Data":"8a4ae88aac9a91122b62446ec5f28893887631e7918bf29b2a7f1febcb052232"} Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.924488 4708 scope.go:117] "RemoveContainer" containerID="374ea53dd75b3654f57af5395c12bb45bf24ea4f8657b22fb747a7d9a2e9fefc" Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.924496 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-p25ml" Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.955269 4708 scope.go:117] "RemoveContainer" containerID="1ed8aa8cdcd430f7906530261e8d240bd79bb837f2266982f037e32492132af1" Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.984529 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p25ml"] Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.998167 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p25ml"] Feb 27 19:01:52 crc kubenswrapper[4708]: I0227 19:01:52.999046 4708 scope.go:117] "RemoveContainer" containerID="62dde4840a800cb4495f22f18ee6e0ded26bf01e2713ea218e00d98fa6be5e85" Feb 27 19:01:53 crc kubenswrapper[4708]: I0227 19:01:53.054447 4708 scope.go:117] "RemoveContainer" containerID="374ea53dd75b3654f57af5395c12bb45bf24ea4f8657b22fb747a7d9a2e9fefc" Feb 27 19:01:53 crc kubenswrapper[4708]: E0227 19:01:53.055002 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"374ea53dd75b3654f57af5395c12bb45bf24ea4f8657b22fb747a7d9a2e9fefc\": container with ID starting with 374ea53dd75b3654f57af5395c12bb45bf24ea4f8657b22fb747a7d9a2e9fefc not found: ID does not exist" containerID="374ea53dd75b3654f57af5395c12bb45bf24ea4f8657b22fb747a7d9a2e9fefc" Feb 27 19:01:53 crc kubenswrapper[4708]: I0227 19:01:53.055066 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"374ea53dd75b3654f57af5395c12bb45bf24ea4f8657b22fb747a7d9a2e9fefc"} err="failed to get container status \"374ea53dd75b3654f57af5395c12bb45bf24ea4f8657b22fb747a7d9a2e9fefc\": rpc error: code = NotFound desc = could not find container \"374ea53dd75b3654f57af5395c12bb45bf24ea4f8657b22fb747a7d9a2e9fefc\": container with ID starting with 374ea53dd75b3654f57af5395c12bb45bf24ea4f8657b22fb747a7d9a2e9fefc not found: ID does not exist" Feb 27 19:01:53 crc kubenswrapper[4708]: I0227 19:01:53.055107 4708 scope.go:117] "RemoveContainer" containerID="1ed8aa8cdcd430f7906530261e8d240bd79bb837f2266982f037e32492132af1" Feb 27 19:01:53 crc kubenswrapper[4708]: E0227 19:01:53.055577 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1ed8aa8cdcd430f7906530261e8d240bd79bb837f2266982f037e32492132af1\": container with ID starting with 1ed8aa8cdcd430f7906530261e8d240bd79bb837f2266982f037e32492132af1 not found: ID does not exist" containerID="1ed8aa8cdcd430f7906530261e8d240bd79bb837f2266982f037e32492132af1" Feb 27 19:01:53 crc kubenswrapper[4708]: I0227 19:01:53.055616 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1ed8aa8cdcd430f7906530261e8d240bd79bb837f2266982f037e32492132af1"} err="failed to get container status \"1ed8aa8cdcd430f7906530261e8d240bd79bb837f2266982f037e32492132af1\": rpc error: code = NotFound desc = could not find container \"1ed8aa8cdcd430f7906530261e8d240bd79bb837f2266982f037e32492132af1\": container with ID starting with 1ed8aa8cdcd430f7906530261e8d240bd79bb837f2266982f037e32492132af1 not found: ID does not exist" Feb 27 19:01:53 crc kubenswrapper[4708]: I0227 19:01:53.055670 4708 scope.go:117] "RemoveContainer" containerID="62dde4840a800cb4495f22f18ee6e0ded26bf01e2713ea218e00d98fa6be5e85" Feb 27 19:01:53 crc kubenswrapper[4708]: E0227 19:01:53.056103 4708 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"62dde4840a800cb4495f22f18ee6e0ded26bf01e2713ea218e00d98fa6be5e85\": container with ID starting with 62dde4840a800cb4495f22f18ee6e0ded26bf01e2713ea218e00d98fa6be5e85 not found: ID does not exist" containerID="62dde4840a800cb4495f22f18ee6e0ded26bf01e2713ea218e00d98fa6be5e85" Feb 27 19:01:53 crc kubenswrapper[4708]: I0227 19:01:53.056139 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62dde4840a800cb4495f22f18ee6e0ded26bf01e2713ea218e00d98fa6be5e85"} err="failed to get container status \"62dde4840a800cb4495f22f18ee6e0ded26bf01e2713ea218e00d98fa6be5e85\": rpc error: code = NotFound desc = could not find container \"62dde4840a800cb4495f22f18ee6e0ded26bf01e2713ea218e00d98fa6be5e85\": container with ID starting with 62dde4840a800cb4495f22f18ee6e0ded26bf01e2713ea218e00d98fa6be5e85 not found: ID does not exist" Feb 27 19:01:54 crc kubenswrapper[4708]: I0227 19:01:54.248801 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" path="/var/lib/kubelet/pods/4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334/volumes" Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.154844 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536982-n8h8w"] Feb 27 19:02:00 crc kubenswrapper[4708]: E0227 19:02:00.156280 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" containerName="extract-content" Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.156308 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" containerName="extract-content" Feb 27 19:02:00 crc kubenswrapper[4708]: E0227 19:02:00.156339 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" containerName="extract-utilities" Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.156354 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" containerName="extract-utilities" Feb 27 19:02:00 crc kubenswrapper[4708]: E0227 19:02:00.156401 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" containerName="registry-server" Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.156414 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" containerName="registry-server" Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.156828 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="4755a08f-11c9-4b9e-9f1e-ed6bbbfc7334" containerName="registry-server" Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.158338 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536982-n8h8w" Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.163477 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.164409 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.167147 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.169895 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536982-n8h8w"] Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.284634 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5jwz\" (UniqueName: \"kubernetes.io/projected/4450ef08-2763-4625-8614-879f04ceb032-kube-api-access-v5jwz\") pod \"auto-csr-approver-29536982-n8h8w\" (UID: \"4450ef08-2763-4625-8614-879f04ceb032\") " pod="openshift-infra/auto-csr-approver-29536982-n8h8w" Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.387570 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5jwz\" (UniqueName: \"kubernetes.io/projected/4450ef08-2763-4625-8614-879f04ceb032-kube-api-access-v5jwz\") pod \"auto-csr-approver-29536982-n8h8w\" (UID: \"4450ef08-2763-4625-8614-879f04ceb032\") " pod="openshift-infra/auto-csr-approver-29536982-n8h8w" Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.411270 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5jwz\" (UniqueName: \"kubernetes.io/projected/4450ef08-2763-4625-8614-879f04ceb032-kube-api-access-v5jwz\") pod \"auto-csr-approver-29536982-n8h8w\" (UID: \"4450ef08-2763-4625-8614-879f04ceb032\") " pod="openshift-infra/auto-csr-approver-29536982-n8h8w" Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.422416 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.486671 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4g87q"] Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.486773 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536982-n8h8w" Feb 27 19:02:00 crc kubenswrapper[4708]: W0227 19:02:00.976483 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4450ef08_2763_4625_8614_879f04ceb032.slice/crio-5af90033ea631e17cd2d2ee90f9c0e92fae7dd0b6a6c1189d77d6a339144a96d WatchSource:0}: Error finding container 5af90033ea631e17cd2d2ee90f9c0e92fae7dd0b6a6c1189d77d6a339144a96d: Status 404 returned error can't find the container with id 5af90033ea631e17cd2d2ee90f9c0e92fae7dd0b6a6c1189d77d6a339144a96d Feb 27 19:02:00 crc kubenswrapper[4708]: I0227 19:02:00.976542 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536982-n8h8w"] Feb 27 19:02:01 crc kubenswrapper[4708]: I0227 19:02:01.023681 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536982-n8h8w" event={"ID":"4450ef08-2763-4625-8614-879f04ceb032","Type":"ContainerStarted","Data":"5af90033ea631e17cd2d2ee90f9c0e92fae7dd0b6a6c1189d77d6a339144a96d"} Feb 27 19:02:01 crc kubenswrapper[4708]: I0227 19:02:01.024151 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4g87q" podUID="61e761be-74e5-4dbb-b49b-15f597efc0e2" containerName="registry-server" containerID="cri-o://67963f162959226b0f983343f90ae4999a869542a33a8bd79c462774c0b1051c" gracePeriod=2 Feb 27 19:02:01 crc kubenswrapper[4708]: I0227 19:02:01.578321 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:02:01 crc kubenswrapper[4708]: I0227 19:02:01.611574 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e761be-74e5-4dbb-b49b-15f597efc0e2-utilities\") pod \"61e761be-74e5-4dbb-b49b-15f597efc0e2\" (UID: \"61e761be-74e5-4dbb-b49b-15f597efc0e2\") " Feb 27 19:02:01 crc kubenswrapper[4708]: I0227 19:02:01.611982 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e761be-74e5-4dbb-b49b-15f597efc0e2-catalog-content\") pod \"61e761be-74e5-4dbb-b49b-15f597efc0e2\" (UID: \"61e761be-74e5-4dbb-b49b-15f597efc0e2\") " Feb 27 19:02:01 crc kubenswrapper[4708]: I0227 19:02:01.612115 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9mcps\" (UniqueName: \"kubernetes.io/projected/61e761be-74e5-4dbb-b49b-15f597efc0e2-kube-api-access-9mcps\") pod \"61e761be-74e5-4dbb-b49b-15f597efc0e2\" (UID: \"61e761be-74e5-4dbb-b49b-15f597efc0e2\") " Feb 27 19:02:01 crc kubenswrapper[4708]: I0227 19:02:01.612885 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61e761be-74e5-4dbb-b49b-15f597efc0e2-utilities" (OuterVolumeSpecName: "utilities") pod "61e761be-74e5-4dbb-b49b-15f597efc0e2" (UID: "61e761be-74e5-4dbb-b49b-15f597efc0e2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:02:01 crc kubenswrapper[4708]: I0227 19:02:01.619096 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61e761be-74e5-4dbb-b49b-15f597efc0e2-kube-api-access-9mcps" (OuterVolumeSpecName: "kube-api-access-9mcps") pod "61e761be-74e5-4dbb-b49b-15f597efc0e2" (UID: "61e761be-74e5-4dbb-b49b-15f597efc0e2"). 
InnerVolumeSpecName "kube-api-access-9mcps". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:02:01 crc kubenswrapper[4708]: I0227 19:02:01.682187 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61e761be-74e5-4dbb-b49b-15f597efc0e2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61e761be-74e5-4dbb-b49b-15f597efc0e2" (UID: "61e761be-74e5-4dbb-b49b-15f597efc0e2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:02:01 crc kubenswrapper[4708]: I0227 19:02:01.715883 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61e761be-74e5-4dbb-b49b-15f597efc0e2-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:02:01 crc kubenswrapper[4708]: I0227 19:02:01.715952 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9mcps\" (UniqueName: \"kubernetes.io/projected/61e761be-74e5-4dbb-b49b-15f597efc0e2-kube-api-access-9mcps\") on node \"crc\" DevicePath \"\"" Feb 27 19:02:01 crc kubenswrapper[4708]: I0227 19:02:01.715996 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61e761be-74e5-4dbb-b49b-15f597efc0e2-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.038320 4708 generic.go:334] "Generic (PLEG): container finished" podID="61e761be-74e5-4dbb-b49b-15f597efc0e2" containerID="67963f162959226b0f983343f90ae4999a869542a33a8bd79c462774c0b1051c" exitCode=0 Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.038376 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4g87q" Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.038415 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g87q" event={"ID":"61e761be-74e5-4dbb-b49b-15f597efc0e2","Type":"ContainerDied","Data":"67963f162959226b0f983343f90ae4999a869542a33a8bd79c462774c0b1051c"} Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.038777 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4g87q" event={"ID":"61e761be-74e5-4dbb-b49b-15f597efc0e2","Type":"ContainerDied","Data":"16c161676e77eafcec6a20b35620f3ba40f8e7abd26f61e476cf751ec954284a"} Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.038808 4708 scope.go:117] "RemoveContainer" containerID="67963f162959226b0f983343f90ae4999a869542a33a8bd79c462774c0b1051c" Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.076809 4708 scope.go:117] "RemoveContainer" containerID="f79297d1b0c8822738fbcfb33363a6d45230081ccb2ab29e8644166086e81498" Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.089588 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4g87q"] Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.098758 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4g87q"] Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.100219 4708 scope.go:117] "RemoveContainer" containerID="fdbd510ab68ed7069ee9ef2c9f840d07e0089315537ebae05b6294f6d7c5fe38" Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.152678 4708 scope.go:117] "RemoveContainer" containerID="67963f162959226b0f983343f90ae4999a869542a33a8bd79c462774c0b1051c" Feb 27 19:02:02 crc 
kubenswrapper[4708]: E0227 19:02:02.153560 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67963f162959226b0f983343f90ae4999a869542a33a8bd79c462774c0b1051c\": container with ID starting with 67963f162959226b0f983343f90ae4999a869542a33a8bd79c462774c0b1051c not found: ID does not exist" containerID="67963f162959226b0f983343f90ae4999a869542a33a8bd79c462774c0b1051c" Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.153590 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67963f162959226b0f983343f90ae4999a869542a33a8bd79c462774c0b1051c"} err="failed to get container status \"67963f162959226b0f983343f90ae4999a869542a33a8bd79c462774c0b1051c\": rpc error: code = NotFound desc = could not find container \"67963f162959226b0f983343f90ae4999a869542a33a8bd79c462774c0b1051c\": container with ID starting with 67963f162959226b0f983343f90ae4999a869542a33a8bd79c462774c0b1051c not found: ID does not exist" Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.153606 4708 scope.go:117] "RemoveContainer" containerID="f79297d1b0c8822738fbcfb33363a6d45230081ccb2ab29e8644166086e81498" Feb 27 19:02:02 crc kubenswrapper[4708]: E0227 19:02:02.153862 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f79297d1b0c8822738fbcfb33363a6d45230081ccb2ab29e8644166086e81498\": container with ID starting with f79297d1b0c8822738fbcfb33363a6d45230081ccb2ab29e8644166086e81498 not found: ID does not exist" containerID="f79297d1b0c8822738fbcfb33363a6d45230081ccb2ab29e8644166086e81498" Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.153885 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f79297d1b0c8822738fbcfb33363a6d45230081ccb2ab29e8644166086e81498"} err="failed to get container status \"f79297d1b0c8822738fbcfb33363a6d45230081ccb2ab29e8644166086e81498\": rpc error: code = NotFound desc = could not find container \"f79297d1b0c8822738fbcfb33363a6d45230081ccb2ab29e8644166086e81498\": container with ID starting with f79297d1b0c8822738fbcfb33363a6d45230081ccb2ab29e8644166086e81498 not found: ID does not exist" Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.153899 4708 scope.go:117] "RemoveContainer" containerID="fdbd510ab68ed7069ee9ef2c9f840d07e0089315537ebae05b6294f6d7c5fe38" Feb 27 19:02:02 crc kubenswrapper[4708]: E0227 19:02:02.154116 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdbd510ab68ed7069ee9ef2c9f840d07e0089315537ebae05b6294f6d7c5fe38\": container with ID starting with fdbd510ab68ed7069ee9ef2c9f840d07e0089315537ebae05b6294f6d7c5fe38 not found: ID does not exist" containerID="fdbd510ab68ed7069ee9ef2c9f840d07e0089315537ebae05b6294f6d7c5fe38" Feb 27 19:02:02 crc kubenswrapper[4708]: I0227 19:02:02.154136 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdbd510ab68ed7069ee9ef2c9f840d07e0089315537ebae05b6294f6d7c5fe38"} err="failed to get container status \"fdbd510ab68ed7069ee9ef2c9f840d07e0089315537ebae05b6294f6d7c5fe38\": rpc error: code = NotFound desc = could not find container \"fdbd510ab68ed7069ee9ef2c9f840d07e0089315537ebae05b6294f6d7c5fe38\": container with ID starting with fdbd510ab68ed7069ee9ef2c9f840d07e0089315537ebae05b6294f6d7c5fe38 not found: ID does not exist" Feb 27 19:02:02 crc kubenswrapper[4708]: 
I0227 19:02:02.240904 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61e761be-74e5-4dbb-b49b-15f597efc0e2" path="/var/lib/kubelet/pods/61e761be-74e5-4dbb-b49b-15f597efc0e2/volumes" Feb 27 19:02:03 crc kubenswrapper[4708]: I0227 19:02:03.053129 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536982-n8h8w" event={"ID":"4450ef08-2763-4625-8614-879f04ceb032","Type":"ContainerStarted","Data":"e0c7ed73f61d69e0050a2d6fc8f14907d03c5b4ae4480b788a0dd3acf9bc7b87"} Feb 27 19:02:03 crc kubenswrapper[4708]: I0227 19:02:03.065719 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536982-n8h8w" podStartSLOduration=1.521396131 podStartE2EDuration="3.065698112s" podCreationTimestamp="2026-02-27 19:02:00 +0000 UTC" firstStartedPulling="2026-02-27 19:02:00.983884657 +0000 UTC m=+7719.499682254" lastFinishedPulling="2026-02-27 19:02:02.528186638 +0000 UTC m=+7721.043984235" observedRunningTime="2026-02-27 19:02:03.064978122 +0000 UTC m=+7721.580775719" watchObservedRunningTime="2026-02-27 19:02:03.065698112 +0000 UTC m=+7721.581495709" Feb 27 19:02:04 crc kubenswrapper[4708]: I0227 19:02:04.064796 4708 generic.go:334] "Generic (PLEG): container finished" podID="4450ef08-2763-4625-8614-879f04ceb032" containerID="e0c7ed73f61d69e0050a2d6fc8f14907d03c5b4ae4480b788a0dd3acf9bc7b87" exitCode=0 Feb 27 19:02:04 crc kubenswrapper[4708]: I0227 19:02:04.064906 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536982-n8h8w" event={"ID":"4450ef08-2763-4625-8614-879f04ceb032","Type":"ContainerDied","Data":"e0c7ed73f61d69e0050a2d6fc8f14907d03c5b4ae4480b788a0dd3acf9bc7b87"} Feb 27 19:02:05 crc kubenswrapper[4708]: I0227 19:02:05.550711 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536982-n8h8w" Feb 27 19:02:05 crc kubenswrapper[4708]: I0227 19:02:05.598361 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5jwz\" (UniqueName: \"kubernetes.io/projected/4450ef08-2763-4625-8614-879f04ceb032-kube-api-access-v5jwz\") pod \"4450ef08-2763-4625-8614-879f04ceb032\" (UID: \"4450ef08-2763-4625-8614-879f04ceb032\") " Feb 27 19:02:05 crc kubenswrapper[4708]: I0227 19:02:05.610702 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4450ef08-2763-4625-8614-879f04ceb032-kube-api-access-v5jwz" (OuterVolumeSpecName: "kube-api-access-v5jwz") pod "4450ef08-2763-4625-8614-879f04ceb032" (UID: "4450ef08-2763-4625-8614-879f04ceb032"). InnerVolumeSpecName "kube-api-access-v5jwz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:02:05 crc kubenswrapper[4708]: I0227 19:02:05.631962 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:02:05 crc kubenswrapper[4708]: I0227 19:02:05.632034 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:02:05 crc kubenswrapper[4708]: I0227 19:02:05.701822 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5jwz\" (UniqueName: \"kubernetes.io/projected/4450ef08-2763-4625-8614-879f04ceb032-kube-api-access-v5jwz\") on node \"crc\" DevicePath \"\"" Feb 27 19:02:06 crc kubenswrapper[4708]: I0227 19:02:06.090792 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536982-n8h8w" event={"ID":"4450ef08-2763-4625-8614-879f04ceb032","Type":"ContainerDied","Data":"5af90033ea631e17cd2d2ee90f9c0e92fae7dd0b6a6c1189d77d6a339144a96d"} Feb 27 19:02:06 crc kubenswrapper[4708]: I0227 19:02:06.090838 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5af90033ea631e17cd2d2ee90f9c0e92fae7dd0b6a6c1189d77d6a339144a96d" Feb 27 19:02:06 crc kubenswrapper[4708]: I0227 19:02:06.090902 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536982-n8h8w" Feb 27 19:02:06 crc kubenswrapper[4708]: I0227 19:02:06.641060 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536976-drwj9"] Feb 27 19:02:06 crc kubenswrapper[4708]: I0227 19:02:06.659158 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536976-drwj9"] Feb 27 19:02:08 crc kubenswrapper[4708]: I0227 19:02:08.248097 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a608316-a793-460a-b4ac-e7cdba1275ed" path="/var/lib/kubelet/pods/9a608316-a793-460a-b4ac-e7cdba1275ed/volumes" Feb 27 19:02:08 crc kubenswrapper[4708]: I0227 19:02:08.395146 4708 scope.go:117] "RemoveContainer" containerID="82fd2f55f42a6d85099d2f6679af3396fec3a3108b5f01068c9404ece3021f74" Feb 27 19:02:35 crc kubenswrapper[4708]: I0227 19:02:35.631719 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:02:35 crc kubenswrapper[4708]: I0227 19:02:35.632416 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:03:05 crc kubenswrapper[4708]: I0227 19:03:05.631196 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:03:05 crc kubenswrapper[4708]: I0227 19:03:05.631737 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:03:05 crc kubenswrapper[4708]: I0227 19:03:05.631798 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 19:03:05 crc kubenswrapper[4708]: I0227 19:03:05.632953 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"995c06fab749458d7934e418b18338358959b15d6a8dcdc365c46a871a147a79"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:03:05 crc kubenswrapper[4708]: I0227 19:03:05.633058 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://995c06fab749458d7934e418b18338358959b15d6a8dcdc365c46a871a147a79" gracePeriod=600 Feb 27 19:03:06 crc kubenswrapper[4708]: I0227 19:03:06.835729 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="995c06fab749458d7934e418b18338358959b15d6a8dcdc365c46a871a147a79" exitCode=0 Feb 27 19:03:06 crc kubenswrapper[4708]: I0227 19:03:06.835842 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"995c06fab749458d7934e418b18338358959b15d6a8dcdc365c46a871a147a79"} Feb 27 19:03:06 crc kubenswrapper[4708]: I0227 19:03:06.836119 4708 scope.go:117] "RemoveContainer" containerID="505b70f7eea1e0f6690da3dbd9d082c369c603b78b25ab44fc66f612df541cb3" Feb 27 19:03:07 crc kubenswrapper[4708]: I0227 19:03:07.848886 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace"} Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.159295 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536984-78mrn"] Feb 27 19:04:00 crc kubenswrapper[4708]: E0227 19:04:00.160410 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e761be-74e5-4dbb-b49b-15f597efc0e2" containerName="extract-content" Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.160427 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e761be-74e5-4dbb-b49b-15f597efc0e2" containerName="extract-content" Feb 27 19:04:00 crc kubenswrapper[4708]: E0227 19:04:00.160462 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e761be-74e5-4dbb-b49b-15f597efc0e2" containerName="registry-server" Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 
19:04:00.160471 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e761be-74e5-4dbb-b49b-15f597efc0e2" containerName="registry-server" Feb 27 19:04:00 crc kubenswrapper[4708]: E0227 19:04:00.160508 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4450ef08-2763-4625-8614-879f04ceb032" containerName="oc" Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.160515 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4450ef08-2763-4625-8614-879f04ceb032" containerName="oc" Feb 27 19:04:00 crc kubenswrapper[4708]: E0227 19:04:00.160534 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61e761be-74e5-4dbb-b49b-15f597efc0e2" containerName="extract-utilities" Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.160542 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="61e761be-74e5-4dbb-b49b-15f597efc0e2" containerName="extract-utilities" Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.160766 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="61e761be-74e5-4dbb-b49b-15f597efc0e2" containerName="registry-server" Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.160797 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="4450ef08-2763-4625-8614-879f04ceb032" containerName="oc" Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.161776 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536984-78mrn" Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.164463 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.164664 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.165003 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.172417 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536984-78mrn"] Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.269021 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj9lm\" (UniqueName: \"kubernetes.io/projected/0e1b7922-e103-4699-bfff-094c8a63fe68-kube-api-access-cj9lm\") pod \"auto-csr-approver-29536984-78mrn\" (UID: \"0e1b7922-e103-4699-bfff-094c8a63fe68\") " pod="openshift-infra/auto-csr-approver-29536984-78mrn" Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.371437 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cj9lm\" (UniqueName: \"kubernetes.io/projected/0e1b7922-e103-4699-bfff-094c8a63fe68-kube-api-access-cj9lm\") pod \"auto-csr-approver-29536984-78mrn\" (UID: \"0e1b7922-e103-4699-bfff-094c8a63fe68\") " pod="openshift-infra/auto-csr-approver-29536984-78mrn" Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.395913 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cj9lm\" (UniqueName: \"kubernetes.io/projected/0e1b7922-e103-4699-bfff-094c8a63fe68-kube-api-access-cj9lm\") pod \"auto-csr-approver-29536984-78mrn\" (UID: \"0e1b7922-e103-4699-bfff-094c8a63fe68\") " pod="openshift-infra/auto-csr-approver-29536984-78mrn" Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.481492 4708 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536984-78mrn" Feb 27 19:04:00 crc kubenswrapper[4708]: W0227 19:04:00.972828 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0e1b7922_e103_4699_bfff_094c8a63fe68.slice/crio-7cf3b4155c8e8e189c83d125ef7c63642e162e30b786ebcaf4110de4895f49ee WatchSource:0}: Error finding container 7cf3b4155c8e8e189c83d125ef7c63642e162e30b786ebcaf4110de4895f49ee: Status 404 returned error can't find the container with id 7cf3b4155c8e8e189c83d125ef7c63642e162e30b786ebcaf4110de4895f49ee Feb 27 19:04:00 crc kubenswrapper[4708]: I0227 19:04:00.973012 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536984-78mrn"] Feb 27 19:04:01 crc kubenswrapper[4708]: I0227 19:04:01.687602 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536984-78mrn" event={"ID":"0e1b7922-e103-4699-bfff-094c8a63fe68","Type":"ContainerStarted","Data":"7cf3b4155c8e8e189c83d125ef7c63642e162e30b786ebcaf4110de4895f49ee"} Feb 27 19:04:04 crc kubenswrapper[4708]: I0227 19:04:04.715566 4708 generic.go:334] "Generic (PLEG): container finished" podID="0e1b7922-e103-4699-bfff-094c8a63fe68" containerID="6d044d215c7f13d1c3c85aa89fd0581fa5034a0ea231dc47748b3d6c9b1c0c99" exitCode=0 Feb 27 19:04:04 crc kubenswrapper[4708]: I0227 19:04:04.716127 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536984-78mrn" event={"ID":"0e1b7922-e103-4699-bfff-094c8a63fe68","Type":"ContainerDied","Data":"6d044d215c7f13d1c3c85aa89fd0581fa5034a0ea231dc47748b3d6c9b1c0c99"} Feb 27 19:04:06 crc kubenswrapper[4708]: I0227 19:04:06.212393 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536984-78mrn" Feb 27 19:04:06 crc kubenswrapper[4708]: I0227 19:04:06.332208 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj9lm\" (UniqueName: \"kubernetes.io/projected/0e1b7922-e103-4699-bfff-094c8a63fe68-kube-api-access-cj9lm\") pod \"0e1b7922-e103-4699-bfff-094c8a63fe68\" (UID: \"0e1b7922-e103-4699-bfff-094c8a63fe68\") " Feb 27 19:04:06 crc kubenswrapper[4708]: I0227 19:04:06.340937 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e1b7922-e103-4699-bfff-094c8a63fe68-kube-api-access-cj9lm" (OuterVolumeSpecName: "kube-api-access-cj9lm") pod "0e1b7922-e103-4699-bfff-094c8a63fe68" (UID: "0e1b7922-e103-4699-bfff-094c8a63fe68"). InnerVolumeSpecName "kube-api-access-cj9lm". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:04:06 crc kubenswrapper[4708]: I0227 19:04:06.437441 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cj9lm\" (UniqueName: \"kubernetes.io/projected/0e1b7922-e103-4699-bfff-094c8a63fe68-kube-api-access-cj9lm\") on node \"crc\" DevicePath \"\"" Feb 27 19:04:06 crc kubenswrapper[4708]: I0227 19:04:06.737758 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536984-78mrn" event={"ID":"0e1b7922-e103-4699-bfff-094c8a63fe68","Type":"ContainerDied","Data":"7cf3b4155c8e8e189c83d125ef7c63642e162e30b786ebcaf4110de4895f49ee"} Feb 27 19:04:06 crc kubenswrapper[4708]: I0227 19:04:06.737802 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536984-78mrn" Feb 27 19:04:06 crc kubenswrapper[4708]: I0227 19:04:06.737812 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cf3b4155c8e8e189c83d125ef7c63642e162e30b786ebcaf4110de4895f49ee" Feb 27 19:04:07 crc kubenswrapper[4708]: I0227 19:04:07.275484 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536978-4cmdp"] Feb 27 19:04:07 crc kubenswrapper[4708]: I0227 19:04:07.286442 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536978-4cmdp"] Feb 27 19:04:08 crc kubenswrapper[4708]: I0227 19:04:08.241722 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eca96e98-47ad-4b0c-a231-f954ef746657" path="/var/lib/kubelet/pods/eca96e98-47ad-4b0c-a231-f954ef746657/volumes" Feb 27 19:04:08 crc kubenswrapper[4708]: I0227 19:04:08.564247 4708 scope.go:117] "RemoveContainer" containerID="d5a7b90b3093ee0e33433ca62f74574d69bb51259791a179520e53d934ccb3e2" Feb 27 19:05:35 crc kubenswrapper[4708]: I0227 19:05:35.631183 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:05:35 crc kubenswrapper[4708]: I0227 19:05:35.632014 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.448463 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-xbmpt"] Feb 27 19:05:59 crc kubenswrapper[4708]: E0227 19:05:59.449586 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0e1b7922-e103-4699-bfff-094c8a63fe68" containerName="oc" Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.449602 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e1b7922-e103-4699-bfff-094c8a63fe68" containerName="oc" Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.449873 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="0e1b7922-e103-4699-bfff-094c8a63fe68" containerName="oc" Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.451653 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.466040 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbmpt"] Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.572341 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4587c99f-42f2-4098-94aa-c77b6f2a230a-utilities\") pod \"redhat-marketplace-xbmpt\" (UID: \"4587c99f-42f2-4098-94aa-c77b6f2a230a\") " pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.572405 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qjcb6\" (UniqueName: \"kubernetes.io/projected/4587c99f-42f2-4098-94aa-c77b6f2a230a-kube-api-access-qjcb6\") pod \"redhat-marketplace-xbmpt\" (UID: \"4587c99f-42f2-4098-94aa-c77b6f2a230a\") " pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.572510 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4587c99f-42f2-4098-94aa-c77b6f2a230a-catalog-content\") pod \"redhat-marketplace-xbmpt\" (UID: \"4587c99f-42f2-4098-94aa-c77b6f2a230a\") " pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.674864 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4587c99f-42f2-4098-94aa-c77b6f2a230a-utilities\") pod \"redhat-marketplace-xbmpt\" (UID: \"4587c99f-42f2-4098-94aa-c77b6f2a230a\") " pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.674919 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qjcb6\" (UniqueName: \"kubernetes.io/projected/4587c99f-42f2-4098-94aa-c77b6f2a230a-kube-api-access-qjcb6\") pod \"redhat-marketplace-xbmpt\" (UID: \"4587c99f-42f2-4098-94aa-c77b6f2a230a\") " pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.674989 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4587c99f-42f2-4098-94aa-c77b6f2a230a-catalog-content\") pod \"redhat-marketplace-xbmpt\" (UID: \"4587c99f-42f2-4098-94aa-c77b6f2a230a\") " pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.675416 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4587c99f-42f2-4098-94aa-c77b6f2a230a-utilities\") pod \"redhat-marketplace-xbmpt\" (UID: \"4587c99f-42f2-4098-94aa-c77b6f2a230a\") " pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.675460 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4587c99f-42f2-4098-94aa-c77b6f2a230a-catalog-content\") pod \"redhat-marketplace-xbmpt\" (UID: \"4587c99f-42f2-4098-94aa-c77b6f2a230a\") " pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.695734 4708 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-qjcb6\" (UniqueName: \"kubernetes.io/projected/4587c99f-42f2-4098-94aa-c77b6f2a230a-kube-api-access-qjcb6\") pod \"redhat-marketplace-xbmpt\" (UID: \"4587c99f-42f2-4098-94aa-c77b6f2a230a\") " pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:05:59 crc kubenswrapper[4708]: I0227 19:05:59.779485 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.144344 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536986-wg4ml"] Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.146351 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536986-wg4ml" Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.194859 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.195302 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.195525 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.220480 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536986-wg4ml"] Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.258388 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbmpt"] Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.294919 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkpdb\" (UniqueName: \"kubernetes.io/projected/07c26e19-8fab-4604-bf17-c7d15e3c05e0-kube-api-access-rkpdb\") pod \"auto-csr-approver-29536986-wg4ml\" (UID: \"07c26e19-8fab-4604-bf17-c7d15e3c05e0\") " pod="openshift-infra/auto-csr-approver-29536986-wg4ml" Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.396774 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkpdb\" (UniqueName: \"kubernetes.io/projected/07c26e19-8fab-4604-bf17-c7d15e3c05e0-kube-api-access-rkpdb\") pod \"auto-csr-approver-29536986-wg4ml\" (UID: \"07c26e19-8fab-4604-bf17-c7d15e3c05e0\") " pod="openshift-infra/auto-csr-approver-29536986-wg4ml" Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.414764 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkpdb\" (UniqueName: \"kubernetes.io/projected/07c26e19-8fab-4604-bf17-c7d15e3c05e0-kube-api-access-rkpdb\") pod \"auto-csr-approver-29536986-wg4ml\" (UID: \"07c26e19-8fab-4604-bf17-c7d15e3c05e0\") " pod="openshift-infra/auto-csr-approver-29536986-wg4ml" Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.527881 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536986-wg4ml" Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.979666 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536986-wg4ml"] Feb 27 19:06:00 crc kubenswrapper[4708]: W0227 19:06:00.985591 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod07c26e19_8fab_4604_bf17_c7d15e3c05e0.slice/crio-ecd8d7b48b85a4e56f9f8245980daaa999d9e5e08ce367c7955047a96abeb00a WatchSource:0}: Error finding container ecd8d7b48b85a4e56f9f8245980daaa999d9e5e08ce367c7955047a96abeb00a: Status 404 returned error can't find the container with id ecd8d7b48b85a4e56f9f8245980daaa999d9e5e08ce367c7955047a96abeb00a Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.996518 4708 generic.go:334] "Generic (PLEG): container finished" podID="4587c99f-42f2-4098-94aa-c77b6f2a230a" containerID="f151c98210102ebfaca991a2c006b04a41ac0d3e26bc1f4751d71055dd7d3171" exitCode=0 Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.996586 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbmpt" event={"ID":"4587c99f-42f2-4098-94aa-c77b6f2a230a","Type":"ContainerDied","Data":"f151c98210102ebfaca991a2c006b04a41ac0d3e26bc1f4751d71055dd7d3171"} Feb 27 19:06:00 crc kubenswrapper[4708]: I0227 19:06:00.996626 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbmpt" event={"ID":"4587c99f-42f2-4098-94aa-c77b6f2a230a","Type":"ContainerStarted","Data":"a1e151fa14825d9e8ef09960d5b9b1e0297abc2f87967ef4b07999007dbd1129"} Feb 27 19:06:02 crc kubenswrapper[4708]: I0227 19:06:02.010146 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536986-wg4ml" event={"ID":"07c26e19-8fab-4604-bf17-c7d15e3c05e0","Type":"ContainerStarted","Data":"ecd8d7b48b85a4e56f9f8245980daaa999d9e5e08ce367c7955047a96abeb00a"} Feb 27 19:06:03 crc kubenswrapper[4708]: I0227 19:06:03.023381 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536986-wg4ml" event={"ID":"07c26e19-8fab-4604-bf17-c7d15e3c05e0","Type":"ContainerStarted","Data":"7f19bb251e501ded17cf1284dbecdbbc1a58afa18ce6d6d826ca28a1be6a2182"} Feb 27 19:06:03 crc kubenswrapper[4708]: I0227 19:06:03.025042 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbmpt" event={"ID":"4587c99f-42f2-4098-94aa-c77b6f2a230a","Type":"ContainerStarted","Data":"50014eb45c42f0351df292e5964132c8e63fb94e3d5d3a8bc8a92880516ac8c7"} Feb 27 19:06:03 crc kubenswrapper[4708]: I0227 19:06:03.051056 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536986-wg4ml" podStartSLOduration=1.387764626 podStartE2EDuration="3.05103563s" podCreationTimestamp="2026-02-27 19:06:00 +0000 UTC" firstStartedPulling="2026-02-27 19:06:00.988076237 +0000 UTC m=+7959.503873844" lastFinishedPulling="2026-02-27 19:06:02.651347261 +0000 UTC m=+7961.167144848" observedRunningTime="2026-02-27 19:06:03.045585086 +0000 UTC m=+7961.561382693" watchObservedRunningTime="2026-02-27 19:06:03.05103563 +0000 UTC m=+7961.566833217" Feb 27 19:06:04 crc kubenswrapper[4708]: I0227 19:06:04.038577 4708 generic.go:334] "Generic (PLEG): container finished" podID="07c26e19-8fab-4604-bf17-c7d15e3c05e0" containerID="7f19bb251e501ded17cf1284dbecdbbc1a58afa18ce6d6d826ca28a1be6a2182" 
exitCode=0 Feb 27 19:06:04 crc kubenswrapper[4708]: I0227 19:06:04.038692 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536986-wg4ml" event={"ID":"07c26e19-8fab-4604-bf17-c7d15e3c05e0","Type":"ContainerDied","Data":"7f19bb251e501ded17cf1284dbecdbbc1a58afa18ce6d6d826ca28a1be6a2182"} Feb 27 19:06:04 crc kubenswrapper[4708]: I0227 19:06:04.042204 4708 generic.go:334] "Generic (PLEG): container finished" podID="4587c99f-42f2-4098-94aa-c77b6f2a230a" containerID="50014eb45c42f0351df292e5964132c8e63fb94e3d5d3a8bc8a92880516ac8c7" exitCode=0 Feb 27 19:06:04 crc kubenswrapper[4708]: I0227 19:06:04.042321 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbmpt" event={"ID":"4587c99f-42f2-4098-94aa-c77b6f2a230a","Type":"ContainerDied","Data":"50014eb45c42f0351df292e5964132c8e63fb94e3d5d3a8bc8a92880516ac8c7"} Feb 27 19:06:05 crc kubenswrapper[4708]: I0227 19:06:05.054826 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbmpt" event={"ID":"4587c99f-42f2-4098-94aa-c77b6f2a230a","Type":"ContainerStarted","Data":"4af3a2655a24454f97ca6d30749ca7f6ff0bea9168ae702b45ce004d4a459e1f"} Feb 27 19:06:05 crc kubenswrapper[4708]: I0227 19:06:05.075751 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-xbmpt" podStartSLOduration=2.57940925 podStartE2EDuration="6.07304313s" podCreationTimestamp="2026-02-27 19:05:59 +0000 UTC" firstStartedPulling="2026-02-27 19:06:00.999038826 +0000 UTC m=+7959.514836453" lastFinishedPulling="2026-02-27 19:06:04.492672716 +0000 UTC m=+7963.008470333" observedRunningTime="2026-02-27 19:06:05.071884107 +0000 UTC m=+7963.587681694" watchObservedRunningTime="2026-02-27 19:06:05.07304313 +0000 UTC m=+7963.588840717" Feb 27 19:06:05 crc kubenswrapper[4708]: I0227 19:06:05.517108 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536986-wg4ml" Feb 27 19:06:05 crc kubenswrapper[4708]: I0227 19:06:05.623959 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkpdb\" (UniqueName: \"kubernetes.io/projected/07c26e19-8fab-4604-bf17-c7d15e3c05e0-kube-api-access-rkpdb\") pod \"07c26e19-8fab-4604-bf17-c7d15e3c05e0\" (UID: \"07c26e19-8fab-4604-bf17-c7d15e3c05e0\") " Feb 27 19:06:05 crc kubenswrapper[4708]: I0227 19:06:05.631735 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:06:05 crc kubenswrapper[4708]: I0227 19:06:05.631975 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:06:05 crc kubenswrapper[4708]: I0227 19:06:05.634076 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07c26e19-8fab-4604-bf17-c7d15e3c05e0-kube-api-access-rkpdb" (OuterVolumeSpecName: "kube-api-access-rkpdb") pod "07c26e19-8fab-4604-bf17-c7d15e3c05e0" (UID: "07c26e19-8fab-4604-bf17-c7d15e3c05e0"). 
InnerVolumeSpecName "kube-api-access-rkpdb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:06:05 crc kubenswrapper[4708]: I0227 19:06:05.726744 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkpdb\" (UniqueName: \"kubernetes.io/projected/07c26e19-8fab-4604-bf17-c7d15e3c05e0-kube-api-access-rkpdb\") on node \"crc\" DevicePath \"\"" Feb 27 19:06:06 crc kubenswrapper[4708]: I0227 19:06:06.064406 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536986-wg4ml" Feb 27 19:06:06 crc kubenswrapper[4708]: I0227 19:06:06.064432 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536986-wg4ml" event={"ID":"07c26e19-8fab-4604-bf17-c7d15e3c05e0","Type":"ContainerDied","Data":"ecd8d7b48b85a4e56f9f8245980daaa999d9e5e08ce367c7955047a96abeb00a"} Feb 27 19:06:06 crc kubenswrapper[4708]: I0227 19:06:06.064498 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecd8d7b48b85a4e56f9f8245980daaa999d9e5e08ce367c7955047a96abeb00a" Feb 27 19:06:06 crc kubenswrapper[4708]: I0227 19:06:06.593607 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536980-pxjgn"] Feb 27 19:06:06 crc kubenswrapper[4708]: I0227 19:06:06.615232 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536980-pxjgn"] Feb 27 19:06:08 crc kubenswrapper[4708]: I0227 19:06:08.241700 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2832e08-72ae-4762-bd42-682a07d95f26" path="/var/lib/kubelet/pods/f2832e08-72ae-4762-bd42-682a07d95f26/volumes" Feb 27 19:06:09 crc kubenswrapper[4708]: I0227 19:06:09.780572 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:06:09 crc kubenswrapper[4708]: I0227 19:06:09.781344 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:06:09 crc kubenswrapper[4708]: I0227 19:06:09.842792 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:06:10 crc kubenswrapper[4708]: I0227 19:06:10.167029 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:06:10 crc kubenswrapper[4708]: I0227 19:06:10.221605 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbmpt"] Feb 27 19:06:12 crc kubenswrapper[4708]: I0227 19:06:12.147827 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-xbmpt" podUID="4587c99f-42f2-4098-94aa-c77b6f2a230a" containerName="registry-server" containerID="cri-o://4af3a2655a24454f97ca6d30749ca7f6ff0bea9168ae702b45ce004d4a459e1f" gracePeriod=2 Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:12.758674 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:12.887955 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qjcb6\" (UniqueName: \"kubernetes.io/projected/4587c99f-42f2-4098-94aa-c77b6f2a230a-kube-api-access-qjcb6\") pod \"4587c99f-42f2-4098-94aa-c77b6f2a230a\" (UID: \"4587c99f-42f2-4098-94aa-c77b6f2a230a\") " Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:12.888005 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4587c99f-42f2-4098-94aa-c77b6f2a230a-catalog-content\") pod \"4587c99f-42f2-4098-94aa-c77b6f2a230a\" (UID: \"4587c99f-42f2-4098-94aa-c77b6f2a230a\") " Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:12.888111 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4587c99f-42f2-4098-94aa-c77b6f2a230a-utilities\") pod \"4587c99f-42f2-4098-94aa-c77b6f2a230a\" (UID: \"4587c99f-42f2-4098-94aa-c77b6f2a230a\") " Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:12.889163 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4587c99f-42f2-4098-94aa-c77b6f2a230a-utilities" (OuterVolumeSpecName: "utilities") pod "4587c99f-42f2-4098-94aa-c77b6f2a230a" (UID: "4587c99f-42f2-4098-94aa-c77b6f2a230a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:12.901947 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4587c99f-42f2-4098-94aa-c77b6f2a230a-kube-api-access-qjcb6" (OuterVolumeSpecName: "kube-api-access-qjcb6") pod "4587c99f-42f2-4098-94aa-c77b6f2a230a" (UID: "4587c99f-42f2-4098-94aa-c77b6f2a230a"). InnerVolumeSpecName "kube-api-access-qjcb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:12.929218 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4587c99f-42f2-4098-94aa-c77b6f2a230a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4587c99f-42f2-4098-94aa-c77b6f2a230a" (UID: "4587c99f-42f2-4098-94aa-c77b6f2a230a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:12.990341 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4587c99f-42f2-4098-94aa-c77b6f2a230a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:12.990370 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qjcb6\" (UniqueName: \"kubernetes.io/projected/4587c99f-42f2-4098-94aa-c77b6f2a230a-kube-api-access-qjcb6\") on node \"crc\" DevicePath \"\"" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:12.990383 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4587c99f-42f2-4098-94aa-c77b6f2a230a-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.161725 4708 generic.go:334] "Generic (PLEG): container finished" podID="4587c99f-42f2-4098-94aa-c77b6f2a230a" containerID="4af3a2655a24454f97ca6d30749ca7f6ff0bea9168ae702b45ce004d4a459e1f" exitCode=0 Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.161956 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbmpt" event={"ID":"4587c99f-42f2-4098-94aa-c77b6f2a230a","Type":"ContainerDied","Data":"4af3a2655a24454f97ca6d30749ca7f6ff0bea9168ae702b45ce004d4a459e1f"} Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.162078 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-xbmpt" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.162092 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-xbmpt" event={"ID":"4587c99f-42f2-4098-94aa-c77b6f2a230a","Type":"ContainerDied","Data":"a1e151fa14825d9e8ef09960d5b9b1e0297abc2f87967ef4b07999007dbd1129"} Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.162121 4708 scope.go:117] "RemoveContainer" containerID="4af3a2655a24454f97ca6d30749ca7f6ff0bea9168ae702b45ce004d4a459e1f" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.197472 4708 scope.go:117] "RemoveContainer" containerID="50014eb45c42f0351df292e5964132c8e63fb94e3d5d3a8bc8a92880516ac8c7" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.216188 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbmpt"] Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.223768 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-xbmpt"] Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.235861 4708 scope.go:117] "RemoveContainer" containerID="f151c98210102ebfaca991a2c006b04a41ac0d3e26bc1f4751d71055dd7d3171" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.264935 4708 scope.go:117] "RemoveContainer" containerID="4af3a2655a24454f97ca6d30749ca7f6ff0bea9168ae702b45ce004d4a459e1f" Feb 27 19:06:13 crc kubenswrapper[4708]: E0227 19:06:13.265403 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4af3a2655a24454f97ca6d30749ca7f6ff0bea9168ae702b45ce004d4a459e1f\": container with ID starting with 4af3a2655a24454f97ca6d30749ca7f6ff0bea9168ae702b45ce004d4a459e1f not found: ID does not exist" containerID="4af3a2655a24454f97ca6d30749ca7f6ff0bea9168ae702b45ce004d4a459e1f" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.265436 4708 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4af3a2655a24454f97ca6d30749ca7f6ff0bea9168ae702b45ce004d4a459e1f"} err="failed to get container status \"4af3a2655a24454f97ca6d30749ca7f6ff0bea9168ae702b45ce004d4a459e1f\": rpc error: code = NotFound desc = could not find container \"4af3a2655a24454f97ca6d30749ca7f6ff0bea9168ae702b45ce004d4a459e1f\": container with ID starting with 4af3a2655a24454f97ca6d30749ca7f6ff0bea9168ae702b45ce004d4a459e1f not found: ID does not exist" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.265460 4708 scope.go:117] "RemoveContainer" containerID="50014eb45c42f0351df292e5964132c8e63fb94e3d5d3a8bc8a92880516ac8c7" Feb 27 19:06:13 crc kubenswrapper[4708]: E0227 19:06:13.265815 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"50014eb45c42f0351df292e5964132c8e63fb94e3d5d3a8bc8a92880516ac8c7\": container with ID starting with 50014eb45c42f0351df292e5964132c8e63fb94e3d5d3a8bc8a92880516ac8c7 not found: ID does not exist" containerID="50014eb45c42f0351df292e5964132c8e63fb94e3d5d3a8bc8a92880516ac8c7" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.265905 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"50014eb45c42f0351df292e5964132c8e63fb94e3d5d3a8bc8a92880516ac8c7"} err="failed to get container status \"50014eb45c42f0351df292e5964132c8e63fb94e3d5d3a8bc8a92880516ac8c7\": rpc error: code = NotFound desc = could not find container \"50014eb45c42f0351df292e5964132c8e63fb94e3d5d3a8bc8a92880516ac8c7\": container with ID starting with 50014eb45c42f0351df292e5964132c8e63fb94e3d5d3a8bc8a92880516ac8c7 not found: ID does not exist" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.265949 4708 scope.go:117] "RemoveContainer" containerID="f151c98210102ebfaca991a2c006b04a41ac0d3e26bc1f4751d71055dd7d3171" Feb 27 19:06:13 crc kubenswrapper[4708]: E0227 19:06:13.266413 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f151c98210102ebfaca991a2c006b04a41ac0d3e26bc1f4751d71055dd7d3171\": container with ID starting with f151c98210102ebfaca991a2c006b04a41ac0d3e26bc1f4751d71055dd7d3171 not found: ID does not exist" containerID="f151c98210102ebfaca991a2c006b04a41ac0d3e26bc1f4751d71055dd7d3171" Feb 27 19:06:13 crc kubenswrapper[4708]: I0227 19:06:13.266447 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f151c98210102ebfaca991a2c006b04a41ac0d3e26bc1f4751d71055dd7d3171"} err="failed to get container status \"f151c98210102ebfaca991a2c006b04a41ac0d3e26bc1f4751d71055dd7d3171\": rpc error: code = NotFound desc = could not find container \"f151c98210102ebfaca991a2c006b04a41ac0d3e26bc1f4751d71055dd7d3171\": container with ID starting with f151c98210102ebfaca991a2c006b04a41ac0d3e26bc1f4751d71055dd7d3171 not found: ID does not exist" Feb 27 19:06:14 crc kubenswrapper[4708]: I0227 19:06:14.246163 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4587c99f-42f2-4098-94aa-c77b6f2a230a" path="/var/lib/kubelet/pods/4587c99f-42f2-4098-94aa-c77b6f2a230a/volumes" Feb 27 19:06:35 crc kubenswrapper[4708]: I0227 19:06:35.632021 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:06:35 crc kubenswrapper[4708]: I0227 19:06:35.632637 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:06:35 crc kubenswrapper[4708]: I0227 19:06:35.632690 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 19:06:35 crc kubenswrapper[4708]: I0227 19:06:35.633622 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:06:35 crc kubenswrapper[4708]: I0227 19:06:35.633729 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" gracePeriod=600 Feb 27 19:06:35 crc kubenswrapper[4708]: E0227 19:06:35.772058 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:06:36 crc kubenswrapper[4708]: I0227 19:06:36.688465 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" exitCode=0 Feb 27 19:06:36 crc kubenswrapper[4708]: I0227 19:06:36.688519 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace"} Feb 27 19:06:36 crc kubenswrapper[4708]: I0227 19:06:36.688582 4708 scope.go:117] "RemoveContainer" containerID="995c06fab749458d7934e418b18338358959b15d6a8dcdc365c46a871a147a79" Feb 27 19:06:36 crc kubenswrapper[4708]: I0227 19:06:36.690004 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:06:36 crc kubenswrapper[4708]: E0227 19:06:36.690540 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:06:49 crc kubenswrapper[4708]: I0227 19:06:49.231918 4708 scope.go:117] "RemoveContainer" 
containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:06:49 crc kubenswrapper[4708]: E0227 19:06:49.232702 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:07:04 crc kubenswrapper[4708]: I0227 19:07:04.228987 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:07:04 crc kubenswrapper[4708]: E0227 19:07:04.229724 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:07:08 crc kubenswrapper[4708]: I0227 19:07:08.720020 4708 scope.go:117] "RemoveContainer" containerID="904688ceeb11365d0f852b98f96c827f0bf7d5915574d913bde3f0f15337fa5a" Feb 27 19:07:17 crc kubenswrapper[4708]: I0227 19:07:17.228412 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:07:17 crc kubenswrapper[4708]: E0227 19:07:17.229548 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:07:32 crc kubenswrapper[4708]: I0227 19:07:32.156995 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6cffdcc987-z48fb" podUID="6e9387a8-c996-4095-8d52-d73b5d6d1d7e" containerName="proxy-server" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 27 19:07:32 crc kubenswrapper[4708]: I0227 19:07:32.237587 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:07:32 crc kubenswrapper[4708]: E0227 19:07:32.237971 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.344762 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qtdfb"] Feb 27 19:07:39 crc kubenswrapper[4708]: E0227 19:07:39.345732 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4587c99f-42f2-4098-94aa-c77b6f2a230a" containerName="extract-utilities" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.345744 4708 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4587c99f-42f2-4098-94aa-c77b6f2a230a" containerName="extract-utilities" Feb 27 19:07:39 crc kubenswrapper[4708]: E0227 19:07:39.345760 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4587c99f-42f2-4098-94aa-c77b6f2a230a" containerName="registry-server" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.345767 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4587c99f-42f2-4098-94aa-c77b6f2a230a" containerName="registry-server" Feb 27 19:07:39 crc kubenswrapper[4708]: E0227 19:07:39.345779 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4587c99f-42f2-4098-94aa-c77b6f2a230a" containerName="extract-content" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.345786 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4587c99f-42f2-4098-94aa-c77b6f2a230a" containerName="extract-content" Feb 27 19:07:39 crc kubenswrapper[4708]: E0227 19:07:39.345818 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07c26e19-8fab-4604-bf17-c7d15e3c05e0" containerName="oc" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.345825 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="07c26e19-8fab-4604-bf17-c7d15e3c05e0" containerName="oc" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.346059 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="07c26e19-8fab-4604-bf17-c7d15e3c05e0" containerName="oc" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.346074 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="4587c99f-42f2-4098-94aa-c77b6f2a230a" containerName="registry-server" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.347639 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.853065 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qtdfb"] Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.863133 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-utilities\") pod \"redhat-operators-qtdfb\" (UID: \"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\") " pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.863185 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-catalog-content\") pod \"redhat-operators-qtdfb\" (UID: \"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\") " pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.863337 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fv6h\" (UniqueName: \"kubernetes.io/projected/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-kube-api-access-4fv6h\") pod \"redhat-operators-qtdfb\" (UID: \"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\") " pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.966262 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-utilities\") pod \"redhat-operators-qtdfb\" (UID: 
\"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\") " pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.966312 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-catalog-content\") pod \"redhat-operators-qtdfb\" (UID: \"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\") " pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.966472 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4fv6h\" (UniqueName: \"kubernetes.io/projected/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-kube-api-access-4fv6h\") pod \"redhat-operators-qtdfb\" (UID: \"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\") " pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.967177 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-catalog-content\") pod \"redhat-operators-qtdfb\" (UID: \"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\") " pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.967178 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-utilities\") pod \"redhat-operators-qtdfb\" (UID: \"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\") " pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:07:39 crc kubenswrapper[4708]: I0227 19:07:39.983653 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fv6h\" (UniqueName: \"kubernetes.io/projected/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-kube-api-access-4fv6h\") pod \"redhat-operators-qtdfb\" (UID: \"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\") " pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:07:40 crc kubenswrapper[4708]: I0227 19:07:40.127568 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:07:40 crc kubenswrapper[4708]: I0227 19:07:40.590869 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qtdfb"] Feb 27 19:07:40 crc kubenswrapper[4708]: I0227 19:07:40.817083 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtdfb" event={"ID":"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e","Type":"ContainerStarted","Data":"7a6e2ecfa4ca17ccebaa1c88f6e5d164bf055298c972d00764a5b15420f216f9"} Feb 27 19:07:41 crc kubenswrapper[4708]: I0227 19:07:41.827457 4708 generic.go:334] "Generic (PLEG): container finished" podID="cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" containerID="1723e54116ba9886f3999c449527aa59e3c99a53738695aa8409c65985fb38d8" exitCode=0 Feb 27 19:07:41 crc kubenswrapper[4708]: I0227 19:07:41.827560 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtdfb" event={"ID":"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e","Type":"ContainerDied","Data":"1723e54116ba9886f3999c449527aa59e3c99a53738695aa8409c65985fb38d8"} Feb 27 19:07:41 crc kubenswrapper[4708]: I0227 19:07:41.830688 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:07:47 crc kubenswrapper[4708]: I0227 19:07:47.228289 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:07:47 crc kubenswrapper[4708]: E0227 19:07:47.229121 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:07:47 crc kubenswrapper[4708]: I0227 19:07:47.231117 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtdfb" event={"ID":"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e","Type":"ContainerStarted","Data":"67b4461d43e355a1d22a38785f4408eac04f09d242009f3a3e7aba6f5383ab85"} Feb 27 19:07:50 crc kubenswrapper[4708]: I0227 19:07:50.267031 4708 generic.go:334] "Generic (PLEG): container finished" podID="cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" containerID="67b4461d43e355a1d22a38785f4408eac04f09d242009f3a3e7aba6f5383ab85" exitCode=0 Feb 27 19:07:50 crc kubenswrapper[4708]: I0227 19:07:50.267115 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtdfb" event={"ID":"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e","Type":"ContainerDied","Data":"67b4461d43e355a1d22a38785f4408eac04f09d242009f3a3e7aba6f5383ab85"} Feb 27 19:07:51 crc kubenswrapper[4708]: I0227 19:07:51.280154 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtdfb" event={"ID":"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e","Type":"ContainerStarted","Data":"48924da55393ec72fd1dde576b24d2f079286643a2900a660cf0ac9690b15aa8"} Feb 27 19:07:51 crc kubenswrapper[4708]: I0227 19:07:51.313418 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qtdfb" podStartSLOduration=3.249288224 podStartE2EDuration="12.313393811s" podCreationTimestamp="2026-02-27 19:07:39 +0000 UTC" firstStartedPulling="2026-02-27 19:07:41.830414794 
+0000 UTC m=+8060.346212391" lastFinishedPulling="2026-02-27 19:07:50.894520351 +0000 UTC m=+8069.410317978" observedRunningTime="2026-02-27 19:07:51.310828829 +0000 UTC m=+8069.826626446" watchObservedRunningTime="2026-02-27 19:07:51.313393811 +0000 UTC m=+8069.829191408" Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.128264 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.128949 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.146429 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536988-5wsq8"] Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.148828 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536988-5wsq8" Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.151031 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.151150 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.151049 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.169977 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536988-5wsq8"] Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.225975 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jzghj\" (UniqueName: \"kubernetes.io/projected/585f7d34-98bb-47d7-8a8a-6debb23e6a3c-kube-api-access-jzghj\") pod \"auto-csr-approver-29536988-5wsq8\" (UID: \"585f7d34-98bb-47d7-8a8a-6debb23e6a3c\") " pod="openshift-infra/auto-csr-approver-29536988-5wsq8" Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.243373 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.328499 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jzghj\" (UniqueName: \"kubernetes.io/projected/585f7d34-98bb-47d7-8a8a-6debb23e6a3c-kube-api-access-jzghj\") pod \"auto-csr-approver-29536988-5wsq8\" (UID: \"585f7d34-98bb-47d7-8a8a-6debb23e6a3c\") " pod="openshift-infra/auto-csr-approver-29536988-5wsq8" Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.347518 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jzghj\" (UniqueName: \"kubernetes.io/projected/585f7d34-98bb-47d7-8a8a-6debb23e6a3c-kube-api-access-jzghj\") pod \"auto-csr-approver-29536988-5wsq8\" (UID: \"585f7d34-98bb-47d7-8a8a-6debb23e6a3c\") " pod="openshift-infra/auto-csr-approver-29536988-5wsq8" Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.421486 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.470581 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536988-5wsq8" Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.493284 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qtdfb"] Feb 27 19:08:00 crc kubenswrapper[4708]: W0227 19:08:00.952642 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod585f7d34_98bb_47d7_8a8a_6debb23e6a3c.slice/crio-d6d286527f9883baa501aa6b8325873baef5447e2e105596dee993a818773d7c WatchSource:0}: Error finding container d6d286527f9883baa501aa6b8325873baef5447e2e105596dee993a818773d7c: Status 404 returned error can't find the container with id d6d286527f9883baa501aa6b8325873baef5447e2e105596dee993a818773d7c Feb 27 19:08:00 crc kubenswrapper[4708]: I0227 19:08:00.958219 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536988-5wsq8"] Feb 27 19:08:01 crc kubenswrapper[4708]: I0227 19:08:01.388158 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536988-5wsq8" event={"ID":"585f7d34-98bb-47d7-8a8a-6debb23e6a3c","Type":"ContainerStarted","Data":"d6d286527f9883baa501aa6b8325873baef5447e2e105596dee993a818773d7c"} Feb 27 19:08:02 crc kubenswrapper[4708]: I0227 19:08:02.244231 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:08:02 crc kubenswrapper[4708]: E0227 19:08:02.244565 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:08:02 crc kubenswrapper[4708]: I0227 19:08:02.399435 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qtdfb" podUID="cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" containerName="registry-server" containerID="cri-o://48924da55393ec72fd1dde576b24d2f079286643a2900a660cf0ac9690b15aa8" gracePeriod=2 Feb 27 19:08:02 crc kubenswrapper[4708]: E0227 19:08:02.658967 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbdb5a1a_a20c_4432_8842_f0c33a2cc97e.slice/crio-48924da55393ec72fd1dde576b24d2f079286643a2900a660cf0ac9690b15aa8.scope\": RecentStats: unable to find data in memory cache]" Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.411063 4708 generic.go:334] "Generic (PLEG): container finished" podID="cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" containerID="48924da55393ec72fd1dde576b24d2f079286643a2900a660cf0ac9690b15aa8" exitCode=0 Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.411386 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtdfb" event={"ID":"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e","Type":"ContainerDied","Data":"48924da55393ec72fd1dde576b24d2f079286643a2900a660cf0ac9690b15aa8"} Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.411418 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qtdfb" 
event={"ID":"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e","Type":"ContainerDied","Data":"7a6e2ecfa4ca17ccebaa1c88f6e5d164bf055298c972d00764a5b15420f216f9"} Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.411432 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a6e2ecfa4ca17ccebaa1c88f6e5d164bf055298c972d00764a5b15420f216f9" Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.413333 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536988-5wsq8" event={"ID":"585f7d34-98bb-47d7-8a8a-6debb23e6a3c","Type":"ContainerStarted","Data":"f252ac7edbe6bceb9caf575ec3b69bd96cd03c676f70852edd7e19887f59fe85"} Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.429426 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536988-5wsq8" podStartSLOduration=1.402573969 podStartE2EDuration="3.429409645s" podCreationTimestamp="2026-02-27 19:08:00 +0000 UTC" firstStartedPulling="2026-02-27 19:08:00.956386159 +0000 UTC m=+8079.472183756" lastFinishedPulling="2026-02-27 19:08:02.983221845 +0000 UTC m=+8081.499019432" observedRunningTime="2026-02-27 19:08:03.426508363 +0000 UTC m=+8081.942305960" watchObservedRunningTime="2026-02-27 19:08:03.429409645 +0000 UTC m=+8081.945207242" Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.502337 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.603331 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-catalog-content\") pod \"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\" (UID: \"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\") " Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.603410 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-utilities\") pod \"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\" (UID: \"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\") " Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.603438 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4fv6h\" (UniqueName: \"kubernetes.io/projected/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-kube-api-access-4fv6h\") pod \"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\" (UID: \"cbdb5a1a-a20c-4432-8842-f0c33a2cc97e\") " Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.605982 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-utilities" (OuterVolumeSpecName: "utilities") pod "cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" (UID: "cbdb5a1a-a20c-4432-8842-f0c33a2cc97e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.611068 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-kube-api-access-4fv6h" (OuterVolumeSpecName: "kube-api-access-4fv6h") pod "cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" (UID: "cbdb5a1a-a20c-4432-8842-f0c33a2cc97e"). InnerVolumeSpecName "kube-api-access-4fv6h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.705956 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.705989 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4fv6h\" (UniqueName: \"kubernetes.io/projected/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-kube-api-access-4fv6h\") on node \"crc\" DevicePath \"\"" Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.727729 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" (UID: "cbdb5a1a-a20c-4432-8842-f0c33a2cc97e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:08:03 crc kubenswrapper[4708]: I0227 19:08:03.808332 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:08:04 crc kubenswrapper[4708]: I0227 19:08:04.423270 4708 generic.go:334] "Generic (PLEG): container finished" podID="585f7d34-98bb-47d7-8a8a-6debb23e6a3c" containerID="f252ac7edbe6bceb9caf575ec3b69bd96cd03c676f70852edd7e19887f59fe85" exitCode=0 Feb 27 19:08:04 crc kubenswrapper[4708]: I0227 19:08:04.423376 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qtdfb" Feb 27 19:08:04 crc kubenswrapper[4708]: I0227 19:08:04.423412 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536988-5wsq8" event={"ID":"585f7d34-98bb-47d7-8a8a-6debb23e6a3c","Type":"ContainerDied","Data":"f252ac7edbe6bceb9caf575ec3b69bd96cd03c676f70852edd7e19887f59fe85"} Feb 27 19:08:04 crc kubenswrapper[4708]: I0227 19:08:04.472146 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qtdfb"] Feb 27 19:08:04 crc kubenswrapper[4708]: I0227 19:08:04.484016 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qtdfb"] Feb 27 19:08:05 crc kubenswrapper[4708]: I0227 19:08:05.931508 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536988-5wsq8" Feb 27 19:08:06 crc kubenswrapper[4708]: I0227 19:08:06.062086 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jzghj\" (UniqueName: \"kubernetes.io/projected/585f7d34-98bb-47d7-8a8a-6debb23e6a3c-kube-api-access-jzghj\") pod \"585f7d34-98bb-47d7-8a8a-6debb23e6a3c\" (UID: \"585f7d34-98bb-47d7-8a8a-6debb23e6a3c\") " Feb 27 19:08:06 crc kubenswrapper[4708]: I0227 19:08:06.082063 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585f7d34-98bb-47d7-8a8a-6debb23e6a3c-kube-api-access-jzghj" (OuterVolumeSpecName: "kube-api-access-jzghj") pod "585f7d34-98bb-47d7-8a8a-6debb23e6a3c" (UID: "585f7d34-98bb-47d7-8a8a-6debb23e6a3c"). InnerVolumeSpecName "kube-api-access-jzghj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:08:06 crc kubenswrapper[4708]: I0227 19:08:06.164607 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jzghj\" (UniqueName: \"kubernetes.io/projected/585f7d34-98bb-47d7-8a8a-6debb23e6a3c-kube-api-access-jzghj\") on node \"crc\" DevicePath \"\"" Feb 27 19:08:06 crc kubenswrapper[4708]: I0227 19:08:06.241819 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" path="/var/lib/kubelet/pods/cbdb5a1a-a20c-4432-8842-f0c33a2cc97e/volumes" Feb 27 19:08:06 crc kubenswrapper[4708]: I0227 19:08:06.442000 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536988-5wsq8" event={"ID":"585f7d34-98bb-47d7-8a8a-6debb23e6a3c","Type":"ContainerDied","Data":"d6d286527f9883baa501aa6b8325873baef5447e2e105596dee993a818773d7c"} Feb 27 19:08:06 crc kubenswrapper[4708]: I0227 19:08:06.442284 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d6d286527f9883baa501aa6b8325873baef5447e2e105596dee993a818773d7c" Feb 27 19:08:06 crc kubenswrapper[4708]: I0227 19:08:06.442052 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536988-5wsq8" Feb 27 19:08:06 crc kubenswrapper[4708]: I0227 19:08:06.520030 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536982-n8h8w"] Feb 27 19:08:06 crc kubenswrapper[4708]: I0227 19:08:06.530885 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536982-n8h8w"] Feb 27 19:08:08 crc kubenswrapper[4708]: I0227 19:08:08.241500 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4450ef08-2763-4625-8614-879f04ceb032" path="/var/lib/kubelet/pods/4450ef08-2763-4625-8614-879f04ceb032/volumes" Feb 27 19:08:08 crc kubenswrapper[4708]: I0227 19:08:08.830539 4708 scope.go:117] "RemoveContainer" containerID="e0c7ed73f61d69e0050a2d6fc8f14907d03c5b4ae4480b788a0dd3acf9bc7b87" Feb 27 19:08:15 crc kubenswrapper[4708]: I0227 19:08:15.227953 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:08:15 crc kubenswrapper[4708]: E0227 19:08:15.229479 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:08:27 crc kubenswrapper[4708]: I0227 19:08:27.228505 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:08:27 crc kubenswrapper[4708]: E0227 19:08:27.229593 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:08:42 crc kubenswrapper[4708]: I0227 19:08:42.241856 4708 scope.go:117] "RemoveContainer" 
containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:08:42 crc kubenswrapper[4708]: E0227 19:08:42.243102 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:08:57 crc kubenswrapper[4708]: I0227 19:08:57.228793 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:08:57 crc kubenswrapper[4708]: E0227 19:08:57.229668 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:09:11 crc kubenswrapper[4708]: I0227 19:09:11.229190 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:09:11 crc kubenswrapper[4708]: E0227 19:09:11.230376 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:09:22 crc kubenswrapper[4708]: I0227 19:09:22.243518 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:09:22 crc kubenswrapper[4708]: E0227 19:09:22.244364 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:09:35 crc kubenswrapper[4708]: I0227 19:09:35.230619 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:09:35 crc kubenswrapper[4708]: E0227 19:09:35.231687 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:09:46 crc kubenswrapper[4708]: I0227 19:09:46.228632 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:09:46 crc kubenswrapper[4708]: E0227 19:09:46.229323 4708 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:09:58 crc kubenswrapper[4708]: I0227 19:09:58.228577 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:09:58 crc kubenswrapper[4708]: E0227 19:09:58.229468 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:10:00 crc kubenswrapper[4708]: I0227 19:10:00.155586 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536990-6kktn"] Feb 27 19:10:00 crc kubenswrapper[4708]: E0227 19:10:00.156334 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585f7d34-98bb-47d7-8a8a-6debb23e6a3c" containerName="oc" Feb 27 19:10:00 crc kubenswrapper[4708]: I0227 19:10:00.156350 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="585f7d34-98bb-47d7-8a8a-6debb23e6a3c" containerName="oc" Feb 27 19:10:00 crc kubenswrapper[4708]: E0227 19:10:00.156383 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" containerName="extract-content" Feb 27 19:10:00 crc kubenswrapper[4708]: I0227 19:10:00.156389 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" containerName="extract-content" Feb 27 19:10:00 crc kubenswrapper[4708]: E0227 19:10:00.156408 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" containerName="registry-server" Feb 27 19:10:00 crc kubenswrapper[4708]: I0227 19:10:00.156414 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" containerName="registry-server" Feb 27 19:10:00 crc kubenswrapper[4708]: E0227 19:10:00.156423 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" containerName="extract-utilities" Feb 27 19:10:00 crc kubenswrapper[4708]: I0227 19:10:00.156429 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" containerName="extract-utilities" Feb 27 19:10:00 crc kubenswrapper[4708]: I0227 19:10:00.156604 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbdb5a1a-a20c-4432-8842-f0c33a2cc97e" containerName="registry-server" Feb 27 19:10:00 crc kubenswrapper[4708]: I0227 19:10:00.156620 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="585f7d34-98bb-47d7-8a8a-6debb23e6a3c" containerName="oc" Feb 27 19:10:00 crc kubenswrapper[4708]: I0227 19:10:00.157343 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536990-6kktn" Feb 27 19:10:00 crc kubenswrapper[4708]: I0227 19:10:00.161753 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:10:00 crc kubenswrapper[4708]: I0227 19:10:00.162778 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:10:00 crc kubenswrapper[4708]: I0227 19:10:00.164910 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:10:00 crc kubenswrapper[4708]: I0227 19:10:00.183531 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536990-6kktn"] Feb 27 19:10:00 crc kubenswrapper[4708]: I0227 19:10:00.953713 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6m49c\" (UniqueName: \"kubernetes.io/projected/a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d-kube-api-access-6m49c\") pod \"auto-csr-approver-29536990-6kktn\" (UID: \"a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d\") " pod="openshift-infra/auto-csr-approver-29536990-6kktn" Feb 27 19:10:01 crc kubenswrapper[4708]: I0227 19:10:01.055453 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6m49c\" (UniqueName: \"kubernetes.io/projected/a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d-kube-api-access-6m49c\") pod \"auto-csr-approver-29536990-6kktn\" (UID: \"a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d\") " pod="openshift-infra/auto-csr-approver-29536990-6kktn" Feb 27 19:10:01 crc kubenswrapper[4708]: I0227 19:10:01.087167 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6m49c\" (UniqueName: \"kubernetes.io/projected/a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d-kube-api-access-6m49c\") pod \"auto-csr-approver-29536990-6kktn\" (UID: \"a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d\") " pod="openshift-infra/auto-csr-approver-29536990-6kktn" Feb 27 19:10:01 crc kubenswrapper[4708]: I0227 19:10:01.294055 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536990-6kktn" Feb 27 19:10:01 crc kubenswrapper[4708]: I0227 19:10:01.765598 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536990-6kktn"] Feb 27 19:10:02 crc kubenswrapper[4708]: I0227 19:10:02.036404 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536990-6kktn" event={"ID":"a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d","Type":"ContainerStarted","Data":"33d87919840246dde431369e487b3fef26947bd3bbd9eeae3b76802a19a0f0d8"} Feb 27 19:10:05 crc kubenswrapper[4708]: I0227 19:10:05.071830 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536990-6kktn" event={"ID":"a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d","Type":"ContainerStarted","Data":"63f276904218f3e28c768cbb7156974463b3e6543d2a06b166f62853508fac5c"} Feb 27 19:10:05 crc kubenswrapper[4708]: I0227 19:10:05.092164 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536990-6kktn" podStartSLOduration=2.114260377 podStartE2EDuration="5.092144578s" podCreationTimestamp="2026-02-27 19:10:00 +0000 UTC" firstStartedPulling="2026-02-27 19:10:01.780931239 +0000 UTC m=+8200.296728826" lastFinishedPulling="2026-02-27 19:10:04.75881541 +0000 UTC m=+8203.274613027" observedRunningTime="2026-02-27 19:10:05.088260798 +0000 UTC m=+8203.604058385" watchObservedRunningTime="2026-02-27 19:10:05.092144578 +0000 UTC m=+8203.607942175" Feb 27 19:10:06 crc kubenswrapper[4708]: I0227 19:10:06.105903 4708 generic.go:334] "Generic (PLEG): container finished" podID="a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d" containerID="63f276904218f3e28c768cbb7156974463b3e6543d2a06b166f62853508fac5c" exitCode=0 Feb 27 19:10:06 crc kubenswrapper[4708]: I0227 19:10:06.106136 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536990-6kktn" event={"ID":"a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d","Type":"ContainerDied","Data":"63f276904218f3e28c768cbb7156974463b3e6543d2a06b166f62853508fac5c"} Feb 27 19:10:07 crc kubenswrapper[4708]: I0227 19:10:07.596538 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536990-6kktn" Feb 27 19:10:07 crc kubenswrapper[4708]: I0227 19:10:07.607812 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6m49c\" (UniqueName: \"kubernetes.io/projected/a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d-kube-api-access-6m49c\") pod \"a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d\" (UID: \"a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d\") " Feb 27 19:10:07 crc kubenswrapper[4708]: I0227 19:10:07.626194 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d-kube-api-access-6m49c" (OuterVolumeSpecName: "kube-api-access-6m49c") pod "a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d" (UID: "a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d"). InnerVolumeSpecName "kube-api-access-6m49c". 
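The pod_startup_latency_tracker entry at 19:10:05.092 above decomposes cleanly: podStartE2EDuration is the wall time from podCreationTimestamp to watchObservedRunningTime (19:10:00 to 19:10:05.092), and podStartSLOduration is that figure minus the image-pull window, which is why the SLO number is the smaller of the two. A quick consistency check in Python against the monotonic (m=+...) clock values printed in the entry:

    # Values copied verbatim from the latency entry above.
    e2e  = 5.092144578                      # podStartE2EDuration
    pull = 8203.274613027 - 8200.296728826  # lastFinishedPulling - firstStartedPulling (m=+ monotonic values)
    print(round(e2e - pull, 9))             # 2.114260377, exactly the logged podStartSLOduration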
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:10:07 crc kubenswrapper[4708]: I0227 19:10:07.710263 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6m49c\" (UniqueName: \"kubernetes.io/projected/a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d-kube-api-access-6m49c\") on node \"crc\" DevicePath \"\"" Feb 27 19:10:08 crc kubenswrapper[4708]: I0227 19:10:08.156626 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536990-6kktn" event={"ID":"a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d","Type":"ContainerDied","Data":"33d87919840246dde431369e487b3fef26947bd3bbd9eeae3b76802a19a0f0d8"} Feb 27 19:10:08 crc kubenswrapper[4708]: I0227 19:10:08.156681 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="33d87919840246dde431369e487b3fef26947bd3bbd9eeae3b76802a19a0f0d8" Feb 27 19:10:08 crc kubenswrapper[4708]: I0227 19:10:08.156763 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536990-6kktn" Feb 27 19:10:08 crc kubenswrapper[4708]: I0227 19:10:08.197827 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536984-78mrn"] Feb 27 19:10:08 crc kubenswrapper[4708]: I0227 19:10:08.210062 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536984-78mrn"] Feb 27 19:10:08 crc kubenswrapper[4708]: I0227 19:10:08.241026 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e1b7922-e103-4699-bfff-094c8a63fe68" path="/var/lib/kubelet/pods/0e1b7922-e103-4699-bfff-094c8a63fe68/volumes" Feb 27 19:10:08 crc kubenswrapper[4708]: I0227 19:10:08.962099 4708 scope.go:117] "RemoveContainer" containerID="6d044d215c7f13d1c3c85aa89fd0581fa5034a0ea231dc47748b3d6c9b1c0c99" Feb 27 19:10:10 crc kubenswrapper[4708]: I0227 19:10:10.228514 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:10:10 crc kubenswrapper[4708]: E0227 19:10:10.229086 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:10:23 crc kubenswrapper[4708]: I0227 19:10:23.229088 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:10:23 crc kubenswrapper[4708]: E0227 19:10:23.230291 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:10:38 crc kubenswrapper[4708]: I0227 19:10:38.235022 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:10:38 crc kubenswrapper[4708]: E0227 19:10:38.235742 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:10:51 crc kubenswrapper[4708]: I0227 19:10:51.229519 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:10:51 crc kubenswrapper[4708]: E0227 19:10:51.230609 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:11:06 crc kubenswrapper[4708]: I0227 19:11:06.229389 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:11:06 crc kubenswrapper[4708]: E0227 19:11:06.230278 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:11:17 crc kubenswrapper[4708]: I0227 19:11:17.229122 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:11:17 crc kubenswrapper[4708]: E0227 19:11:17.230411 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:11:31 crc kubenswrapper[4708]: I0227 19:11:31.229686 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:11:31 crc kubenswrapper[4708]: E0227 19:11:31.231659 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:11:43 crc kubenswrapper[4708]: I0227 19:11:43.229554 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace" Feb 27 19:11:43 crc kubenswrapper[4708]: I0227 19:11:43.509242 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"79dfd594e3849e72676ede36539df4697e5b55e75e1d7950e08908821cac878c"} Feb 27 
19:11:44 crc kubenswrapper[4708]: I0227 19:11:44.429775 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8h6db"] Feb 27 19:11:44 crc kubenswrapper[4708]: E0227 19:11:44.430547 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d" containerName="oc" Feb 27 19:11:44 crc kubenswrapper[4708]: I0227 19:11:44.430563 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d" containerName="oc" Feb 27 19:11:44 crc kubenswrapper[4708]: I0227 19:11:44.430865 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d" containerName="oc" Feb 27 19:11:44 crc kubenswrapper[4708]: I0227 19:11:44.434306 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:11:44 crc kubenswrapper[4708]: I0227 19:11:44.446310 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-catalog-content\") pod \"certified-operators-8h6db\" (UID: \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\") " pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:11:44 crc kubenswrapper[4708]: I0227 19:11:44.446718 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-276kg\" (UniqueName: \"kubernetes.io/projected/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-kube-api-access-276kg\") pod \"certified-operators-8h6db\" (UID: \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\") " pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:11:44 crc kubenswrapper[4708]: I0227 19:11:44.446742 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-utilities\") pod \"certified-operators-8h6db\" (UID: \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\") " pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:11:44 crc kubenswrapper[4708]: I0227 19:11:44.446542 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8h6db"] Feb 27 19:11:44 crc kubenswrapper[4708]: I0227 19:11:44.548968 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-catalog-content\") pod \"certified-operators-8h6db\" (UID: \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\") " pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:11:44 crc kubenswrapper[4708]: I0227 19:11:44.549112 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-276kg\" (UniqueName: \"kubernetes.io/projected/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-kube-api-access-276kg\") pod \"certified-operators-8h6db\" (UID: \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\") " pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:11:44 crc kubenswrapper[4708]: I0227 19:11:44.549149 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-utilities\") pod \"certified-operators-8h6db\" (UID: \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\") " pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:11:44 crc 
kubenswrapper[4708]: I0227 19:11:44.549562 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-catalog-content\") pod \"certified-operators-8h6db\" (UID: \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\") " pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:11:44 crc kubenswrapper[4708]: I0227 19:11:44.549763 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-utilities\") pod \"certified-operators-8h6db\" (UID: \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\") " pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:11:44 crc kubenswrapper[4708]: I0227 19:11:44.583300 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-276kg\" (UniqueName: \"kubernetes.io/projected/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-kube-api-access-276kg\") pod \"certified-operators-8h6db\" (UID: \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\") " pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:11:45 crc kubenswrapper[4708]: I0227 19:11:45.700197 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:11:46 crc kubenswrapper[4708]: I0227 19:11:46.286220 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8h6db"] Feb 27 19:11:46 crc kubenswrapper[4708]: I0227 19:11:46.744433 4708 generic.go:334] "Generic (PLEG): container finished" podID="bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" containerID="43d5f33b7a54e0516782cca39f1b595d79fe23da02556f2094534adfffb18bf5" exitCode=0 Feb 27 19:11:46 crc kubenswrapper[4708]: I0227 19:11:46.744532 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8h6db" event={"ID":"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a","Type":"ContainerDied","Data":"43d5f33b7a54e0516782cca39f1b595d79fe23da02556f2094534adfffb18bf5"} Feb 27 19:11:46 crc kubenswrapper[4708]: I0227 19:11:46.744968 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8h6db" event={"ID":"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a","Type":"ContainerStarted","Data":"3d3267c517717dc5be4403e1480ea7940a17bb837e1f4016aee24a4ae2fc83f3"} Feb 27 19:11:49 crc kubenswrapper[4708]: I0227 19:11:49.781773 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8h6db" event={"ID":"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a","Type":"ContainerStarted","Data":"4f710f6b3198a320f98b79ba47aaf3e77f1897cc5f93d63618e1b078c9955f00"} Feb 27 19:11:50 crc kubenswrapper[4708]: I0227 19:11:50.798535 4708 generic.go:334] "Generic (PLEG): container finished" podID="bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" containerID="4f710f6b3198a320f98b79ba47aaf3e77f1897cc5f93d63618e1b078c9955f00" exitCode=0 Feb 27 19:11:50 crc kubenswrapper[4708]: I0227 19:11:50.798972 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8h6db" event={"ID":"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a","Type":"ContainerDied","Data":"4f710f6b3198a320f98b79ba47aaf3e77f1897cc5f93d63618e1b078c9955f00"} Feb 27 19:11:51 crc kubenswrapper[4708]: I0227 19:11:51.810101 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8h6db" 
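Between 19:08:15 and 19:11:31, machine-config-daemon-kvxg2 sat in CrashLoopBackOff: each sync pass logged a "RemoveContainer" for the dead container followed by an "Error syncing pod, skipping" carrying the capped back-off 5m0s reason, until the restart at 19:11:43 finally produced a ContainerStarted event (79dfd594...). A small sketch that pulls those error timestamps out of a capture like this one and prints the retry cadence; the journal.txt filename is a placeholder:

    import re
    from datetime import datetime

    BACKOFF = re.compile(r'E0227 (\d{2}:\d{2}:\d{2})\.\d+ .*CrashLoopBackOff.*machine-config-daemon-kvxg2')

    times = []
    with open("journal.txt") as fh:        # placeholder path for this journal capture
        for line in fh:
            m = BACKOFF.search(line)
            if m:
                times.append(datetime.strptime(m.group(1), "%H:%M:%S"))

    # For the span above this prints gaps of roughly 11-15 seconds: the sync
    # loop keeps re-evaluating the pod while the 5m0s restart back-off holds.
    for a, b in zip(times, times[1:]):
        print(b.time(), b - a)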
event={"ID":"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a","Type":"ContainerStarted","Data":"59725140e9476e3e8940acc2f497981839ab8572071142f50d08534aac053236"} Feb 27 19:11:52 crc kubenswrapper[4708]: I0227 19:11:52.856449 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8h6db" podStartSLOduration=4.112293534 podStartE2EDuration="8.85643109s" podCreationTimestamp="2026-02-27 19:11:44 +0000 UTC" firstStartedPulling="2026-02-27 19:11:46.746460275 +0000 UTC m=+8305.262257862" lastFinishedPulling="2026-02-27 19:11:51.490597831 +0000 UTC m=+8310.006395418" observedRunningTime="2026-02-27 19:11:52.841005144 +0000 UTC m=+8311.356802741" watchObservedRunningTime="2026-02-27 19:11:52.85643109 +0000 UTC m=+8311.372228667" Feb 27 19:11:55 crc kubenswrapper[4708]: I0227 19:11:55.706548 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:11:55 crc kubenswrapper[4708]: I0227 19:11:55.707637 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:11:55 crc kubenswrapper[4708]: I0227 19:11:55.783749 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:12:00 crc kubenswrapper[4708]: I0227 19:12:00.162948 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536992-nlhjs"] Feb 27 19:12:00 crc kubenswrapper[4708]: I0227 19:12:00.165495 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536992-nlhjs" Feb 27 19:12:00 crc kubenswrapper[4708]: I0227 19:12:00.168385 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:12:00 crc kubenswrapper[4708]: I0227 19:12:00.169180 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:12:00 crc kubenswrapper[4708]: I0227 19:12:00.173714 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536992-nlhjs"] Feb 27 19:12:00 crc kubenswrapper[4708]: I0227 19:12:00.244781 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:12:00 crc kubenswrapper[4708]: I0227 19:12:00.347396 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gcknt\" (UniqueName: \"kubernetes.io/projected/e7fbac4f-74c3-423a-9466-731b20defbb5-kube-api-access-gcknt\") pod \"auto-csr-approver-29536992-nlhjs\" (UID: \"e7fbac4f-74c3-423a-9466-731b20defbb5\") " pod="openshift-infra/auto-csr-approver-29536992-nlhjs" Feb 27 19:12:00 crc kubenswrapper[4708]: I0227 19:12:00.449565 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gcknt\" (UniqueName: \"kubernetes.io/projected/e7fbac4f-74c3-423a-9466-731b20defbb5-kube-api-access-gcknt\") pod \"auto-csr-approver-29536992-nlhjs\" (UID: \"e7fbac4f-74c3-423a-9466-731b20defbb5\") " pod="openshift-infra/auto-csr-approver-29536992-nlhjs" Feb 27 19:12:00 crc kubenswrapper[4708]: I0227 19:12:00.482769 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gcknt\" (UniqueName: \"kubernetes.io/projected/e7fbac4f-74c3-423a-9466-731b20defbb5-kube-api-access-gcknt\") pod 
\"auto-csr-approver-29536992-nlhjs\" (UID: \"e7fbac4f-74c3-423a-9466-731b20defbb5\") " pod="openshift-infra/auto-csr-approver-29536992-nlhjs" Feb 27 19:12:00 crc kubenswrapper[4708]: I0227 19:12:00.562473 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536992-nlhjs" Feb 27 19:12:01 crc kubenswrapper[4708]: I0227 19:12:01.076357 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536992-nlhjs"] Feb 27 19:12:01 crc kubenswrapper[4708]: W0227 19:12:01.079124 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode7fbac4f_74c3_423a_9466_731b20defbb5.slice/crio-1193ca8c77627aea8cfb8c236f91353a04243203d1425489cbe3caa6e1738771 WatchSource:0}: Error finding container 1193ca8c77627aea8cfb8c236f91353a04243203d1425489cbe3caa6e1738771: Status 404 returned error can't find the container with id 1193ca8c77627aea8cfb8c236f91353a04243203d1425489cbe3caa6e1738771 Feb 27 19:12:01 crc kubenswrapper[4708]: I0227 19:12:01.947562 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536992-nlhjs" event={"ID":"e7fbac4f-74c3-423a-9466-731b20defbb5","Type":"ContainerStarted","Data":"1193ca8c77627aea8cfb8c236f91353a04243203d1425489cbe3caa6e1738771"} Feb 27 19:12:03 crc kubenswrapper[4708]: I0227 19:12:03.968931 4708 generic.go:334] "Generic (PLEG): container finished" podID="e7fbac4f-74c3-423a-9466-731b20defbb5" containerID="883cb2348333a0e4ca895fef9d863464623e98338ecef04fba320b32eb4c4e1d" exitCode=0 Feb 27 19:12:03 crc kubenswrapper[4708]: I0227 19:12:03.968974 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536992-nlhjs" event={"ID":"e7fbac4f-74c3-423a-9466-731b20defbb5","Type":"ContainerDied","Data":"883cb2348333a0e4ca895fef9d863464623e98338ecef04fba320b32eb4c4e1d"} Feb 27 19:12:05 crc kubenswrapper[4708]: I0227 19:12:05.424893 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536992-nlhjs" Feb 27 19:12:05 crc kubenswrapper[4708]: I0227 19:12:05.573214 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gcknt\" (UniqueName: \"kubernetes.io/projected/e7fbac4f-74c3-423a-9466-731b20defbb5-kube-api-access-gcknt\") pod \"e7fbac4f-74c3-423a-9466-731b20defbb5\" (UID: \"e7fbac4f-74c3-423a-9466-731b20defbb5\") " Feb 27 19:12:05 crc kubenswrapper[4708]: I0227 19:12:05.580080 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7fbac4f-74c3-423a-9466-731b20defbb5-kube-api-access-gcknt" (OuterVolumeSpecName: "kube-api-access-gcknt") pod "e7fbac4f-74c3-423a-9466-731b20defbb5" (UID: "e7fbac4f-74c3-423a-9466-731b20defbb5"). InnerVolumeSpecName "kube-api-access-gcknt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:12:05 crc kubenswrapper[4708]: I0227 19:12:05.675950 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gcknt\" (UniqueName: \"kubernetes.io/projected/e7fbac4f-74c3-423a-9466-731b20defbb5-kube-api-access-gcknt\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:05 crc kubenswrapper[4708]: I0227 19:12:05.764740 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:12:05 crc kubenswrapper[4708]: I0227 19:12:05.814892 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8h6db"] Feb 27 19:12:05 crc kubenswrapper[4708]: I0227 19:12:05.993897 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536992-nlhjs" Feb 27 19:12:05 crc kubenswrapper[4708]: I0227 19:12:05.993903 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536992-nlhjs" event={"ID":"e7fbac4f-74c3-423a-9466-731b20defbb5","Type":"ContainerDied","Data":"1193ca8c77627aea8cfb8c236f91353a04243203d1425489cbe3caa6e1738771"} Feb 27 19:12:05 crc kubenswrapper[4708]: I0227 19:12:05.994481 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1193ca8c77627aea8cfb8c236f91353a04243203d1425489cbe3caa6e1738771" Feb 27 19:12:05 crc kubenswrapper[4708]: I0227 19:12:05.994098 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8h6db" podUID="bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" containerName="registry-server" containerID="cri-o://59725140e9476e3e8940acc2f497981839ab8572071142f50d08534aac053236" gracePeriod=2 Feb 27 19:12:06 crc kubenswrapper[4708]: I0227 19:12:06.519126 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536986-wg4ml"] Feb 27 19:12:06 crc kubenswrapper[4708]: I0227 19:12:06.532647 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536986-wg4ml"] Feb 27 19:12:07 crc kubenswrapper[4708]: I0227 19:12:07.007490 4708 generic.go:334] "Generic (PLEG): container finished" podID="bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" containerID="59725140e9476e3e8940acc2f497981839ab8572071142f50d08534aac053236" exitCode=0 Feb 27 19:12:07 crc kubenswrapper[4708]: I0227 19:12:07.007568 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8h6db" event={"ID":"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a","Type":"ContainerDied","Data":"59725140e9476e3e8940acc2f497981839ab8572071142f50d08534aac053236"} Feb 27 19:12:07 crc kubenswrapper[4708]: I0227 19:12:07.183449 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:12:07 crc kubenswrapper[4708]: I0227 19:12:07.327472 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-catalog-content\") pod \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\" (UID: \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\") " Feb 27 19:12:07 crc kubenswrapper[4708]: I0227 19:12:07.327594 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-276kg\" (UniqueName: \"kubernetes.io/projected/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-kube-api-access-276kg\") pod \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\" (UID: \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\") " Feb 27 19:12:07 crc kubenswrapper[4708]: I0227 19:12:07.327644 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-utilities\") pod \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\" (UID: \"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a\") " Feb 27 19:12:07 crc kubenswrapper[4708]: I0227 19:12:07.328729 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-utilities" (OuterVolumeSpecName: "utilities") pod "bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" (UID: "bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:12:07 crc kubenswrapper[4708]: I0227 19:12:07.333714 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-kube-api-access-276kg" (OuterVolumeSpecName: "kube-api-access-276kg") pod "bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" (UID: "bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a"). InnerVolumeSpecName "kube-api-access-276kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:12:07 crc kubenswrapper[4708]: I0227 19:12:07.396695 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" (UID: "bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a"). InnerVolumeSpecName "catalog-content". 
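That completes the certified-operators-8h6db teardown: two short-lived containers ran to completion with exitCode=0 (43d5f33b... and 4f710f6b..., presumably extract-utilities and extract-content in that order, per the container names the stale-state entries attach to this pod class), registry-server (59725140..., named explicitly in the Killing entry) served until the API DELETE, and the sandbox (3d3267c5...) went last, followed by the volume unmounts. The per-pod sequence can be reconstructed mechanically from the PLEG events, whose payloads are valid JSON; a sketch, again against a placeholder journal.txt capture:

    import json
    import re

    # PLEG events as they appear in this journal: pod name and event payload
    # sit on the same entry line.
    EVENT = re.compile(r'"SyncLoop \(PLEG\): event for pod" pod="([^"]+)" event=(\{.*\})')

    timeline = {}
    for line in open("journal.txt"):          # placeholder path for this capture
        m = EVENT.search(line)
        if m:
            pod, ev = m.group(1), json.loads(m.group(2))
            timeline.setdefault(pod, []).append((ev["Type"], ev["Data"][:12]))

    for typ, cid in timeline.get("openshift-marketplace/certified-operators-8h6db", []):
        print(typ, cid)
    # Starts with ContainerDied 43d5f33b7a54 / ContainerStarted 3d3267c51771 and
    # ends with ContainerDied 3d3267c51771 once the sandbox itself is gone.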
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:12:07 crc kubenswrapper[4708]: I0227 19:12:07.431297 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:07 crc kubenswrapper[4708]: I0227 19:12:07.431329 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-276kg\" (UniqueName: \"kubernetes.io/projected/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-kube-api-access-276kg\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:07 crc kubenswrapper[4708]: I0227 19:12:07.431340 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:08 crc kubenswrapper[4708]: I0227 19:12:08.023546 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8h6db" event={"ID":"bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a","Type":"ContainerDied","Data":"3d3267c517717dc5be4403e1480ea7940a17bb837e1f4016aee24a4ae2fc83f3"} Feb 27 19:12:08 crc kubenswrapper[4708]: I0227 19:12:08.023653 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8h6db" Feb 27 19:12:08 crc kubenswrapper[4708]: I0227 19:12:08.023942 4708 scope.go:117] "RemoveContainer" containerID="59725140e9476e3e8940acc2f497981839ab8572071142f50d08534aac053236" Feb 27 19:12:08 crc kubenswrapper[4708]: I0227 19:12:08.048657 4708 scope.go:117] "RemoveContainer" containerID="4f710f6b3198a320f98b79ba47aaf3e77f1897cc5f93d63618e1b078c9955f00" Feb 27 19:12:08 crc kubenswrapper[4708]: I0227 19:12:08.083022 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8h6db"] Feb 27 19:12:08 crc kubenswrapper[4708]: I0227 19:12:08.093638 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8h6db"] Feb 27 19:12:08 crc kubenswrapper[4708]: I0227 19:12:08.164675 4708 scope.go:117] "RemoveContainer" containerID="43d5f33b7a54e0516782cca39f1b595d79fe23da02556f2094534adfffb18bf5" Feb 27 19:12:08 crc kubenswrapper[4708]: I0227 19:12:08.239412 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07c26e19-8fab-4604-bf17-c7d15e3c05e0" path="/var/lib/kubelet/pods/07c26e19-8fab-4604-bf17-c7d15e3c05e0/volumes" Feb 27 19:12:08 crc kubenswrapper[4708]: I0227 19:12:08.240289 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" path="/var/lib/kubelet/pods/bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a/volumes" Feb 27 19:12:09 crc kubenswrapper[4708]: I0227 19:12:09.071272 4708 scope.go:117] "RemoveContainer" containerID="7f19bb251e501ded17cf1284dbecdbbc1a58afa18ce6d6d826ca28a1be6a2182" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.663291 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gh96z"] Feb 27 19:12:19 crc kubenswrapper[4708]: E0227 19:12:19.664390 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" containerName="extract-content" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.664411 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" containerName="extract-content" Feb 27 19:12:19 crc 
kubenswrapper[4708]: E0227 19:12:19.664436 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" containerName="registry-server" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.664447 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" containerName="registry-server" Feb 27 19:12:19 crc kubenswrapper[4708]: E0227 19:12:19.664479 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" containerName="extract-utilities" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.664494 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" containerName="extract-utilities" Feb 27 19:12:19 crc kubenswrapper[4708]: E0227 19:12:19.664513 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7fbac4f-74c3-423a-9466-731b20defbb5" containerName="oc" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.664524 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7fbac4f-74c3-423a-9466-731b20defbb5" containerName="oc" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.664825 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfddfe7f-7736-446a-a0a2-5d6cdf8e7e7a" containerName="registry-server" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.664876 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7fbac4f-74c3-423a-9466-731b20defbb5" containerName="oc" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.666786 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.674612 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gh96z"] Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.797125 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85910d55-231a-4583-8c9a-60da2e97fe47-catalog-content\") pod \"community-operators-gh96z\" (UID: \"85910d55-231a-4583-8c9a-60da2e97fe47\") " pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.797364 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm9d4\" (UniqueName: \"kubernetes.io/projected/85910d55-231a-4583-8c9a-60da2e97fe47-kube-api-access-lm9d4\") pod \"community-operators-gh96z\" (UID: \"85910d55-231a-4583-8c9a-60da2e97fe47\") " pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.797717 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85910d55-231a-4583-8c9a-60da2e97fe47-utilities\") pod \"community-operators-gh96z\" (UID: \"85910d55-231a-4583-8c9a-60da2e97fe47\") " pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.899495 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85910d55-231a-4583-8c9a-60da2e97fe47-utilities\") pod \"community-operators-gh96z\" (UID: \"85910d55-231a-4583-8c9a-60da2e97fe47\") " pod="openshift-marketplace/community-operators-gh96z" Feb 27 
19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.899611 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85910d55-231a-4583-8c9a-60da2e97fe47-catalog-content\") pod \"community-operators-gh96z\" (UID: \"85910d55-231a-4583-8c9a-60da2e97fe47\") " pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.899699 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm9d4\" (UniqueName: \"kubernetes.io/projected/85910d55-231a-4583-8c9a-60da2e97fe47-kube-api-access-lm9d4\") pod \"community-operators-gh96z\" (UID: \"85910d55-231a-4583-8c9a-60da2e97fe47\") " pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.900355 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85910d55-231a-4583-8c9a-60da2e97fe47-utilities\") pod \"community-operators-gh96z\" (UID: \"85910d55-231a-4583-8c9a-60da2e97fe47\") " pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.900447 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85910d55-231a-4583-8c9a-60da2e97fe47-catalog-content\") pod \"community-operators-gh96z\" (UID: \"85910d55-231a-4583-8c9a-60da2e97fe47\") " pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.919943 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm9d4\" (UniqueName: \"kubernetes.io/projected/85910d55-231a-4583-8c9a-60da2e97fe47-kube-api-access-lm9d4\") pod \"community-operators-gh96z\" (UID: \"85910d55-231a-4583-8c9a-60da2e97fe47\") " pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:19 crc kubenswrapper[4708]: I0227 19:12:19.999454 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:20 crc kubenswrapper[4708]: I0227 19:12:20.576410 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gh96z"] Feb 27 19:12:21 crc kubenswrapper[4708]: I0227 19:12:21.201375 4708 generic.go:334] "Generic (PLEG): container finished" podID="85910d55-231a-4583-8c9a-60da2e97fe47" containerID="4d17246ecaa857b924ad7c333d47762b6da216c99e8c6e6ad2e3bae2e505ef84" exitCode=0 Feb 27 19:12:21 crc kubenswrapper[4708]: I0227 19:12:21.201446 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh96z" event={"ID":"85910d55-231a-4583-8c9a-60da2e97fe47","Type":"ContainerDied","Data":"4d17246ecaa857b924ad7c333d47762b6da216c99e8c6e6ad2e3bae2e505ef84"} Feb 27 19:12:21 crc kubenswrapper[4708]: I0227 19:12:21.201669 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh96z" event={"ID":"85910d55-231a-4583-8c9a-60da2e97fe47","Type":"ContainerStarted","Data":"4434c668f7f48120db8471fdc68c8858c56de9000f76d9f3ccac2a5f62c144e7"} Feb 27 19:12:24 crc kubenswrapper[4708]: I0227 19:12:24.240822 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh96z" event={"ID":"85910d55-231a-4583-8c9a-60da2e97fe47","Type":"ContainerStarted","Data":"3ae138ad24fcc0ae795a786635f5b238a91ef8dddf8890dd49570d401a28364d"} Feb 27 19:12:26 crc kubenswrapper[4708]: I0227 19:12:26.258011 4708 generic.go:334] "Generic (PLEG): container finished" podID="85910d55-231a-4583-8c9a-60da2e97fe47" containerID="3ae138ad24fcc0ae795a786635f5b238a91ef8dddf8890dd49570d401a28364d" exitCode=0 Feb 27 19:12:26 crc kubenswrapper[4708]: I0227 19:12:26.258099 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh96z" event={"ID":"85910d55-231a-4583-8c9a-60da2e97fe47","Type":"ContainerDied","Data":"3ae138ad24fcc0ae795a786635f5b238a91ef8dddf8890dd49570d401a28364d"} Feb 27 19:12:28 crc kubenswrapper[4708]: I0227 19:12:28.283234 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh96z" event={"ID":"85910d55-231a-4583-8c9a-60da2e97fe47","Type":"ContainerStarted","Data":"430d070b19be421309616adf5cd964a9a485f564f2ed7a5f4fa965288de55f70"} Feb 27 19:12:28 crc kubenswrapper[4708]: I0227 19:12:28.305137 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gh96z" podStartSLOduration=3.151440916 podStartE2EDuration="9.305120914s" podCreationTimestamp="2026-02-27 19:12:19 +0000 UTC" firstStartedPulling="2026-02-27 19:12:21.214458278 +0000 UTC m=+8339.730255865" lastFinishedPulling="2026-02-27 19:12:27.368138276 +0000 UTC m=+8345.883935863" observedRunningTime="2026-02-27 19:12:28.29965987 +0000 UTC m=+8346.815457467" watchObservedRunningTime="2026-02-27 19:12:28.305120914 +0000 UTC m=+8346.820918501" Feb 27 19:12:30 crc kubenswrapper[4708]: I0227 19:12:30.000528 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:30 crc kubenswrapper[4708]: I0227 19:12:30.000812 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:30 crc kubenswrapper[4708]: I0227 19:12:30.050980 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
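The three probe entries just above show the usual gating order for a pod with a startup probe: readiness is first logged with an empty status while startup is still unhealthy, startup flips to started, and readiness only reports ready later (19:12:40, below). A small fold over the probe entries makes the per-pod end state explicit; journal.txt is again a placeholder path:

    import re

    PROBE = re.compile(r'"SyncLoop \(probe\)" probe="(\w+)" status="(\w*)" pod="([^"]+)"')

    state = {}                                # (pod, probe) -> last observed status
    for line in open("journal.txt"):          # placeholder path for this capture
        m = PROBE.search(line)
        if m:
            probe, status, pod = m.groups()
            state[(pod, probe)] = status      # empty readiness statuses appear while startup still gates

    for (pod, probe), status in sorted(state.items()):
        print(pod, probe, status)
    # For community-operators-gh96z this ends at startup=started, readiness=ready.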
pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:40 crc kubenswrapper[4708]: I0227 19:12:40.076240 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:40 crc kubenswrapper[4708]: I0227 19:12:40.154112 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gh96z"] Feb 27 19:12:40 crc kubenswrapper[4708]: I0227 19:12:40.416695 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-gh96z" podUID="85910d55-231a-4583-8c9a-60da2e97fe47" containerName="registry-server" containerID="cri-o://430d070b19be421309616adf5cd964a9a485f564f2ed7a5f4fa965288de55f70" gracePeriod=2 Feb 27 19:12:40 crc kubenswrapper[4708]: E0227 19:12:40.660022 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85910d55_231a_4583_8c9a_60da2e97fe47.slice/crio-430d070b19be421309616adf5cd964a9a485f564f2ed7a5f4fa965288de55f70.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod85910d55_231a_4583_8c9a_60da2e97fe47.slice/crio-conmon-430d070b19be421309616adf5cd964a9a485f564f2ed7a5f4fa965288de55f70.scope\": RecentStats: unable to find data in memory cache]" Feb 27 19:12:41 crc kubenswrapper[4708]: I0227 19:12:41.432709 4708 generic.go:334] "Generic (PLEG): container finished" podID="85910d55-231a-4583-8c9a-60da2e97fe47" containerID="430d070b19be421309616adf5cd964a9a485f564f2ed7a5f4fa965288de55f70" exitCode=0 Feb 27 19:12:41 crc kubenswrapper[4708]: I0227 19:12:41.432757 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh96z" event={"ID":"85910d55-231a-4583-8c9a-60da2e97fe47","Type":"ContainerDied","Data":"430d070b19be421309616adf5cd964a9a485f564f2ed7a5f4fa965288de55f70"} Feb 27 19:12:41 crc kubenswrapper[4708]: I0227 19:12:41.757558 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:41 crc kubenswrapper[4708]: I0227 19:12:41.893178 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85910d55-231a-4583-8c9a-60da2e97fe47-utilities\") pod \"85910d55-231a-4583-8c9a-60da2e97fe47\" (UID: \"85910d55-231a-4583-8c9a-60da2e97fe47\") " Feb 27 19:12:41 crc kubenswrapper[4708]: I0227 19:12:41.893354 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lm9d4\" (UniqueName: \"kubernetes.io/projected/85910d55-231a-4583-8c9a-60da2e97fe47-kube-api-access-lm9d4\") pod \"85910d55-231a-4583-8c9a-60da2e97fe47\" (UID: \"85910d55-231a-4583-8c9a-60da2e97fe47\") " Feb 27 19:12:41 crc kubenswrapper[4708]: I0227 19:12:41.893430 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85910d55-231a-4583-8c9a-60da2e97fe47-catalog-content\") pod \"85910d55-231a-4583-8c9a-60da2e97fe47\" (UID: \"85910d55-231a-4583-8c9a-60da2e97fe47\") " Feb 27 19:12:41 crc kubenswrapper[4708]: I0227 19:12:41.894873 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85910d55-231a-4583-8c9a-60da2e97fe47-utilities" (OuterVolumeSpecName: "utilities") pod "85910d55-231a-4583-8c9a-60da2e97fe47" (UID: "85910d55-231a-4583-8c9a-60da2e97fe47"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:12:41 crc kubenswrapper[4708]: I0227 19:12:41.898513 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85910d55-231a-4583-8c9a-60da2e97fe47-kube-api-access-lm9d4" (OuterVolumeSpecName: "kube-api-access-lm9d4") pod "85910d55-231a-4583-8c9a-60da2e97fe47" (UID: "85910d55-231a-4583-8c9a-60da2e97fe47"). InnerVolumeSpecName "kube-api-access-lm9d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:12:41 crc kubenswrapper[4708]: I0227 19:12:41.962293 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/85910d55-231a-4583-8c9a-60da2e97fe47-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "85910d55-231a-4583-8c9a-60da2e97fe47" (UID: "85910d55-231a-4583-8c9a-60da2e97fe47"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:12:41 crc kubenswrapper[4708]: I0227 19:12:41.996588 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/85910d55-231a-4583-8c9a-60da2e97fe47-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:41 crc kubenswrapper[4708]: I0227 19:12:41.996640 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lm9d4\" (UniqueName: \"kubernetes.io/projected/85910d55-231a-4583-8c9a-60da2e97fe47-kube-api-access-lm9d4\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:41 crc kubenswrapper[4708]: I0227 19:12:41.996654 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/85910d55-231a-4583-8c9a-60da2e97fe47-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:12:42 crc kubenswrapper[4708]: I0227 19:12:42.449657 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gh96z" event={"ID":"85910d55-231a-4583-8c9a-60da2e97fe47","Type":"ContainerDied","Data":"4434c668f7f48120db8471fdc68c8858c56de9000f76d9f3ccac2a5f62c144e7"} Feb 27 19:12:42 crc kubenswrapper[4708]: I0227 19:12:42.449730 4708 scope.go:117] "RemoveContainer" containerID="430d070b19be421309616adf5cd964a9a485f564f2ed7a5f4fa965288de55f70" Feb 27 19:12:42 crc kubenswrapper[4708]: I0227 19:12:42.449791 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gh96z" Feb 27 19:12:42 crc kubenswrapper[4708]: I0227 19:12:42.498663 4708 scope.go:117] "RemoveContainer" containerID="3ae138ad24fcc0ae795a786635f5b238a91ef8dddf8890dd49570d401a28364d" Feb 27 19:12:42 crc kubenswrapper[4708]: I0227 19:12:42.504792 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-gh96z"] Feb 27 19:12:42 crc kubenswrapper[4708]: I0227 19:12:42.530843 4708 scope.go:117] "RemoveContainer" containerID="4d17246ecaa857b924ad7c333d47762b6da216c99e8c6e6ad2e3bae2e505ef84" Feb 27 19:12:42 crc kubenswrapper[4708]: I0227 19:12:42.535417 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-gh96z"] Feb 27 19:12:44 crc kubenswrapper[4708]: I0227 19:12:44.241071 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85910d55-231a-4583-8c9a-60da2e97fe47" path="/var/lib/kubelet/pods/85910d55-231a-4583-8c9a-60da2e97fe47/volumes" Feb 27 19:14:00 crc kubenswrapper[4708]: I0227 19:14:00.153028 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536994-8p9nr"] Feb 27 19:14:00 crc kubenswrapper[4708]: E0227 19:14:00.154060 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85910d55-231a-4583-8c9a-60da2e97fe47" containerName="extract-content" Feb 27 19:14:00 crc kubenswrapper[4708]: I0227 19:14:00.154074 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="85910d55-231a-4583-8c9a-60da2e97fe47" containerName="extract-content" Feb 27 19:14:00 crc kubenswrapper[4708]: E0227 19:14:00.154095 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85910d55-231a-4583-8c9a-60da2e97fe47" containerName="registry-server" Feb 27 19:14:00 crc kubenswrapper[4708]: I0227 19:14:00.154103 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="85910d55-231a-4583-8c9a-60da2e97fe47" containerName="registry-server" Feb 27 19:14:00 crc kubenswrapper[4708]: E0227 19:14:00.154119 4708 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85910d55-231a-4583-8c9a-60da2e97fe47" containerName="extract-utilities" Feb 27 19:14:00 crc kubenswrapper[4708]: I0227 19:14:00.154128 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="85910d55-231a-4583-8c9a-60da2e97fe47" containerName="extract-utilities" Feb 27 19:14:00 crc kubenswrapper[4708]: I0227 19:14:00.154330 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="85910d55-231a-4583-8c9a-60da2e97fe47" containerName="registry-server" Feb 27 19:14:00 crc kubenswrapper[4708]: I0227 19:14:00.155398 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536994-8p9nr" Feb 27 19:14:00 crc kubenswrapper[4708]: I0227 19:14:00.157497 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:14:00 crc kubenswrapper[4708]: I0227 19:14:00.157620 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:14:00 crc kubenswrapper[4708]: I0227 19:14:00.158158 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:14:00 crc kubenswrapper[4708]: I0227 19:14:00.165029 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536994-8p9nr"] Feb 27 19:14:00 crc kubenswrapper[4708]: I0227 19:14:00.323640 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksln4\" (UniqueName: \"kubernetes.io/projected/fc148c18-3668-442e-857c-11ffe9cb0b1c-kube-api-access-ksln4\") pod \"auto-csr-approver-29536994-8p9nr\" (UID: \"fc148c18-3668-442e-857c-11ffe9cb0b1c\") " pod="openshift-infra/auto-csr-approver-29536994-8p9nr" Feb 27 19:14:00 crc kubenswrapper[4708]: I0227 19:14:00.426214 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksln4\" (UniqueName: \"kubernetes.io/projected/fc148c18-3668-442e-857c-11ffe9cb0b1c-kube-api-access-ksln4\") pod \"auto-csr-approver-29536994-8p9nr\" (UID: \"fc148c18-3668-442e-857c-11ffe9cb0b1c\") " pod="openshift-infra/auto-csr-approver-29536994-8p9nr" Feb 27 19:14:00 crc kubenswrapper[4708]: I0227 19:14:00.445888 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksln4\" (UniqueName: \"kubernetes.io/projected/fc148c18-3668-442e-857c-11ffe9cb0b1c-kube-api-access-ksln4\") pod \"auto-csr-approver-29536994-8p9nr\" (UID: \"fc148c18-3668-442e-857c-11ffe9cb0b1c\") " pod="openshift-infra/auto-csr-approver-29536994-8p9nr" Feb 27 19:14:00 crc kubenswrapper[4708]: I0227 19:14:00.494341 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536994-8p9nr" Feb 27 19:14:01 crc kubenswrapper[4708]: I0227 19:14:01.106899 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536994-8p9nr"] Feb 27 19:14:01 crc kubenswrapper[4708]: W0227 19:14:01.115337 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc148c18_3668_442e_857c_11ffe9cb0b1c.slice/crio-0edaeb40d9b5c0446ae3801fbf8435d266c201bed7e7f71f3c071118e550b7ec WatchSource:0}: Error finding container 0edaeb40d9b5c0446ae3801fbf8435d266c201bed7e7f71f3c071118e550b7ec: Status 404 returned error can't find the container with id 0edaeb40d9b5c0446ae3801fbf8435d266c201bed7e7f71f3c071118e550b7ec Feb 27 19:14:01 crc kubenswrapper[4708]: I0227 19:14:01.120134 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:14:02 crc kubenswrapper[4708]: I0227 19:14:02.050747 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536994-8p9nr" event={"ID":"fc148c18-3668-442e-857c-11ffe9cb0b1c","Type":"ContainerStarted","Data":"0edaeb40d9b5c0446ae3801fbf8435d266c201bed7e7f71f3c071118e550b7ec"} Feb 27 19:14:03 crc kubenswrapper[4708]: I0227 19:14:03.064264 4708 generic.go:334] "Generic (PLEG): container finished" podID="fc148c18-3668-442e-857c-11ffe9cb0b1c" containerID="4619e1d0bdcc18ed7b05cd83c159ed98e9bc83fbe1b3423981ebe76ab1d01bcd" exitCode=0 Feb 27 19:14:03 crc kubenswrapper[4708]: I0227 19:14:03.064323 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536994-8p9nr" event={"ID":"fc148c18-3668-442e-857c-11ffe9cb0b1c","Type":"ContainerDied","Data":"4619e1d0bdcc18ed7b05cd83c159ed98e9bc83fbe1b3423981ebe76ab1d01bcd"} Feb 27 19:14:04 crc kubenswrapper[4708]: I0227 19:14:04.521436 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536994-8p9nr" Feb 27 19:14:04 crc kubenswrapper[4708]: I0227 19:14:04.616331 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksln4\" (UniqueName: \"kubernetes.io/projected/fc148c18-3668-442e-857c-11ffe9cb0b1c-kube-api-access-ksln4\") pod \"fc148c18-3668-442e-857c-11ffe9cb0b1c\" (UID: \"fc148c18-3668-442e-857c-11ffe9cb0b1c\") " Feb 27 19:14:04 crc kubenswrapper[4708]: I0227 19:14:04.626196 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc148c18-3668-442e-857c-11ffe9cb0b1c-kube-api-access-ksln4" (OuterVolumeSpecName: "kube-api-access-ksln4") pod "fc148c18-3668-442e-857c-11ffe9cb0b1c" (UID: "fc148c18-3668-442e-857c-11ffe9cb0b1c"). InnerVolumeSpecName "kube-api-access-ksln4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:14:04 crc kubenswrapper[4708]: I0227 19:14:04.719280 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksln4\" (UniqueName: \"kubernetes.io/projected/fc148c18-3668-442e-857c-11ffe9cb0b1c-kube-api-access-ksln4\") on node \"crc\" DevicePath \"\"" Feb 27 19:14:05 crc kubenswrapper[4708]: I0227 19:14:05.091387 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536994-8p9nr" event={"ID":"fc148c18-3668-442e-857c-11ffe9cb0b1c","Type":"ContainerDied","Data":"0edaeb40d9b5c0446ae3801fbf8435d266c201bed7e7f71f3c071118e550b7ec"} Feb 27 19:14:05 crc kubenswrapper[4708]: I0227 19:14:05.091426 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0edaeb40d9b5c0446ae3801fbf8435d266c201bed7e7f71f3c071118e550b7ec" Feb 27 19:14:05 crc kubenswrapper[4708]: I0227 19:14:05.091489 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536994-8p9nr" Feb 27 19:14:05 crc kubenswrapper[4708]: I0227 19:14:05.611273 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536988-5wsq8"] Feb 27 19:14:05 crc kubenswrapper[4708]: I0227 19:14:05.622956 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536988-5wsq8"] Feb 27 19:14:05 crc kubenswrapper[4708]: I0227 19:14:05.631755 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:14:05 crc kubenswrapper[4708]: I0227 19:14:05.631814 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:14:06 crc kubenswrapper[4708]: I0227 19:14:06.259658 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="585f7d34-98bb-47d7-8a8a-6debb23e6a3c" path="/var/lib/kubelet/pods/585f7d34-98bb-47d7-8a8a-6debb23e6a3c/volumes" Feb 27 19:14:09 crc kubenswrapper[4708]: I0227 19:14:09.264906 4708 scope.go:117] "RemoveContainer" containerID="67b4461d43e355a1d22a38785f4408eac04f09d242009f3a3e7aba6f5383ab85" Feb 27 19:14:09 crc kubenswrapper[4708]: I0227 19:14:09.296197 4708 scope.go:117] "RemoveContainer" containerID="48924da55393ec72fd1dde576b24d2f079286643a2900a660cf0ac9690b15aa8" Feb 27 19:14:09 crc kubenswrapper[4708]: I0227 19:14:09.344817 4708 scope.go:117] "RemoveContainer" containerID="f252ac7edbe6bceb9caf575ec3b69bd96cd03c676f70852edd7e19887f59fe85" Feb 27 19:14:09 crc kubenswrapper[4708]: I0227 19:14:09.385232 4708 scope.go:117] "RemoveContainer" containerID="1723e54116ba9886f3999c449527aa59e3c99a53738695aa8409c65985fb38d8" Feb 27 19:14:35 crc kubenswrapper[4708]: I0227 19:14:35.631549 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:14:35 crc kubenswrapper[4708]: I0227 
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.142839 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"]
Feb 27 19:15:00 crc kubenswrapper[4708]: E0227 19:15:00.143710 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc148c18-3668-442e-857c-11ffe9cb0b1c" containerName="oc"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.143722 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc148c18-3668-442e-857c-11ffe9cb0b1c" containerName="oc"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.144397 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc148c18-3668-442e-857c-11ffe9cb0b1c" containerName="oc"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.145117 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.147363 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.147675 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.175988 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"]
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.241119 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-config-volume\") pod \"collect-profiles-29536995-s8lxh\" (UID: \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.241181 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-secret-volume\") pod \"collect-profiles-29536995-s8lxh\" (UID: \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.241411 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64m7g\" (UniqueName: \"kubernetes.io/projected/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-kube-api-access-64m7g\") pod \"collect-profiles-29536995-s8lxh\" (UID: \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.343741 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-config-volume\") pod \"collect-profiles-29536995-s8lxh\" (UID: \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.344199 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-secret-volume\") pod \"collect-profiles-29536995-s8lxh\" (UID: \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.345136 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-config-volume\") pod \"collect-profiles-29536995-s8lxh\" (UID: \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.345409 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64m7g\" (UniqueName: \"kubernetes.io/projected/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-kube-api-access-64m7g\") pod \"collect-profiles-29536995-s8lxh\" (UID: \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.353137 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-secret-volume\") pod \"collect-profiles-29536995-s8lxh\" (UID: \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.364737 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64m7g\" (UniqueName: \"kubernetes.io/projected/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-kube-api-access-64m7g\") pod \"collect-profiles-29536995-s8lxh\" (UID: \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"
Feb 27 19:15:00 crc kubenswrapper[4708]: I0227 19:15:00.511068 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"
Feb 27 19:15:01 crc kubenswrapper[4708]: I0227 19:15:01.077922 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"]
Feb 27 19:15:01 crc kubenswrapper[4708]: I0227 19:15:01.755349 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh" event={"ID":"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6","Type":"ContainerStarted","Data":"692a10f7a82c834bd318bf6ad678fcf78e7e03b09d95e3f21c8052fd2c2024ff"}
Feb 27 19:15:01 crc kubenswrapper[4708]: I0227 19:15:01.755708 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh" event={"ID":"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6","Type":"ContainerStarted","Data":"da76cb5ead243a8c511143a354a99e5c6f5bc37c80362b41201457006a7c3696"}
Feb 27 19:15:01 crc kubenswrapper[4708]: I0227 19:15:01.780711 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh" podStartSLOduration=1.780684229 podStartE2EDuration="1.780684229s" podCreationTimestamp="2026-02-27 19:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:15:01.776307486 +0000 UTC m=+8500.292105103" watchObservedRunningTime="2026-02-27 19:15:01.780684229 +0000 UTC m=+8500.296481836"
Feb 27 19:15:02 crc kubenswrapper[4708]: I0227 19:15:02.766397 4708 generic.go:334] "Generic (PLEG): container finished" podID="4e4a36f1-349f-4b19-88a6-cd8a88e50fd6" containerID="692a10f7a82c834bd318bf6ad678fcf78e7e03b09d95e3f21c8052fd2c2024ff" exitCode=0
Feb 27 19:15:02 crc kubenswrapper[4708]: I0227 19:15:02.766464 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh" event={"ID":"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6","Type":"ContainerDied","Data":"692a10f7a82c834bd318bf6ad678fcf78e7e03b09d95e3f21c8052fd2c2024ff"}
Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.272897 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh"
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh" Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.345168 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-config-volume\") pod \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\" (UID: \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\") " Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.345459 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-secret-volume\") pod \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\" (UID: \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\") " Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.345585 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64m7g\" (UniqueName: \"kubernetes.io/projected/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-kube-api-access-64m7g\") pod \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\" (UID: \"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6\") " Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.345933 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-config-volume" (OuterVolumeSpecName: "config-volume") pod "4e4a36f1-349f-4b19-88a6-cd8a88e50fd6" (UID: "4e4a36f1-349f-4b19-88a6-cd8a88e50fd6"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.346338 4708 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.351395 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "4e4a36f1-349f-4b19-88a6-cd8a88e50fd6" (UID: "4e4a36f1-349f-4b19-88a6-cd8a88e50fd6"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.351635 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-kube-api-access-64m7g" (OuterVolumeSpecName: "kube-api-access-64m7g") pod "4e4a36f1-349f-4b19-88a6-cd8a88e50fd6" (UID: "4e4a36f1-349f-4b19-88a6-cd8a88e50fd6"). InnerVolumeSpecName "kube-api-access-64m7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.448087 4708 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.448118 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64m7g\" (UniqueName: \"kubernetes.io/projected/4e4a36f1-349f-4b19-88a6-cd8a88e50fd6-kube-api-access-64m7g\") on node \"crc\" DevicePath \"\"" Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.810673 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh" event={"ID":"4e4a36f1-349f-4b19-88a6-cd8a88e50fd6","Type":"ContainerDied","Data":"da76cb5ead243a8c511143a354a99e5c6f5bc37c80362b41201457006a7c3696"} Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.811077 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da76cb5ead243a8c511143a354a99e5c6f5bc37c80362b41201457006a7c3696" Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.810727 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29536995-s8lxh" Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.887292 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj"] Feb 27 19:15:04 crc kubenswrapper[4708]: I0227 19:15:04.902673 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536950-skxdj"] Feb 27 19:15:05 crc kubenswrapper[4708]: I0227 19:15:05.631392 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:15:05 crc kubenswrapper[4708]: I0227 19:15:05.631458 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:15:05 crc kubenswrapper[4708]: I0227 19:15:05.631500 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 19:15:05 crc kubenswrapper[4708]: I0227 19:15:05.632242 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"79dfd594e3849e72676ede36539df4697e5b55e75e1d7950e08908821cac878c"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:15:05 crc kubenswrapper[4708]: I0227 19:15:05.632293 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://79dfd594e3849e72676ede36539df4697e5b55e75e1d7950e08908821cac878c" gracePeriod=600 Feb 
Feb 27 19:15:05 crc kubenswrapper[4708]: I0227 19:15:05.823258 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="79dfd594e3849e72676ede36539df4697e5b55e75e1d7950e08908821cac878c" exitCode=0
Feb 27 19:15:05 crc kubenswrapper[4708]: I0227 19:15:05.823331 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"79dfd594e3849e72676ede36539df4697e5b55e75e1d7950e08908821cac878c"}
Feb 27 19:15:05 crc kubenswrapper[4708]: I0227 19:15:05.823627 4708 scope.go:117] "RemoveContainer" containerID="e4c799c0449e2822cfb0bdaf6798ad71f687ee51d26a071a31cfe7bd13669ace"
Feb 27 19:15:06 crc kubenswrapper[4708]: I0227 19:15:06.249665 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2923d922-34e8-425a-9e01-131e2863d638" path="/var/lib/kubelet/pods/2923d922-34e8-425a-9e01-131e2863d638/volumes"
Feb 27 19:15:06 crc kubenswrapper[4708]: I0227 19:15:06.834921 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f"}
Feb 27 19:15:09 crc kubenswrapper[4708]: I0227 19:15:09.480168 4708 scope.go:117] "RemoveContainer" containerID="3205f02eed21b78710f4ae11cf95ebd3a93ac9253cd47f9536545ac8ba75b811"
Feb 27 19:16:00 crc kubenswrapper[4708]: I0227 19:16:00.159158 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536996-pqzsw"]
Feb 27 19:16:00 crc kubenswrapper[4708]: E0227 19:16:00.160744 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e4a36f1-349f-4b19-88a6-cd8a88e50fd6" containerName="collect-profiles"
Feb 27 19:16:00 crc kubenswrapper[4708]: I0227 19:16:00.160785 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e4a36f1-349f-4b19-88a6-cd8a88e50fd6" containerName="collect-profiles"
Feb 27 19:16:00 crc kubenswrapper[4708]: I0227 19:16:00.161441 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e4a36f1-349f-4b19-88a6-cd8a88e50fd6" containerName="collect-profiles"
Feb 27 19:16:00 crc kubenswrapper[4708]: I0227 19:16:00.163340 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536996-pqzsw"
Feb 27 19:16:00 crc kubenswrapper[4708]: I0227 19:16:00.166604 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 19:16:00 crc kubenswrapper[4708]: I0227 19:16:00.167252 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5"
Feb 27 19:16:00 crc kubenswrapper[4708]: I0227 19:16:00.169791 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 19:16:00 crc kubenswrapper[4708]: I0227 19:16:00.172093 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536996-pqzsw"]
Feb 27 19:16:00 crc kubenswrapper[4708]: I0227 19:16:00.250308 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpr5k\" (UniqueName: \"kubernetes.io/projected/5481f469-116a-4cf1-a9a5-396010496da0-kube-api-access-rpr5k\") pod \"auto-csr-approver-29536996-pqzsw\" (UID: \"5481f469-116a-4cf1-a9a5-396010496da0\") " pod="openshift-infra/auto-csr-approver-29536996-pqzsw"
Feb 27 19:16:00 crc kubenswrapper[4708]: I0227 19:16:00.352934 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpr5k\" (UniqueName: \"kubernetes.io/projected/5481f469-116a-4cf1-a9a5-396010496da0-kube-api-access-rpr5k\") pod \"auto-csr-approver-29536996-pqzsw\" (UID: \"5481f469-116a-4cf1-a9a5-396010496da0\") " pod="openshift-infra/auto-csr-approver-29536996-pqzsw"
Feb 27 19:16:00 crc kubenswrapper[4708]: I0227 19:16:00.372427 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpr5k\" (UniqueName: \"kubernetes.io/projected/5481f469-116a-4cf1-a9a5-396010496da0-kube-api-access-rpr5k\") pod \"auto-csr-approver-29536996-pqzsw\" (UID: \"5481f469-116a-4cf1-a9a5-396010496da0\") " pod="openshift-infra/auto-csr-approver-29536996-pqzsw"
Feb 27 19:16:00 crc kubenswrapper[4708]: I0227 19:16:00.487904 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536996-pqzsw"
Feb 27 19:16:00 crc kubenswrapper[4708]: I0227 19:16:00.935322 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536996-pqzsw"]
Feb 27 19:16:00 crc kubenswrapper[4708]: W0227 19:16:00.936737 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5481f469_116a_4cf1_a9a5_396010496da0.slice/crio-54ed841e63c734fd589cab429953e1022ed58c2e4f9ee830a51cd14cf78544e6 WatchSource:0}: Error finding container 54ed841e63c734fd589cab429953e1022ed58c2e4f9ee830a51cd14cf78544e6: Status 404 returned error can't find the container with id 54ed841e63c734fd589cab429953e1022ed58c2e4f9ee830a51cd14cf78544e6
Feb 27 19:16:01 crc kubenswrapper[4708]: I0227 19:16:01.452971 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536996-pqzsw" event={"ID":"5481f469-116a-4cf1-a9a5-396010496da0","Type":"ContainerStarted","Data":"54ed841e63c734fd589cab429953e1022ed58c2e4f9ee830a51cd14cf78544e6"}
Feb 27 19:16:02 crc kubenswrapper[4708]: I0227 19:16:02.467965 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536996-pqzsw" event={"ID":"5481f469-116a-4cf1-a9a5-396010496da0","Type":"ContainerStarted","Data":"980a3ac0984d114aeeebdaca046b073f7c924af88f8eeedfb7d8bd22f0df0b4f"}
Feb 27 19:16:02 crc kubenswrapper[4708]: I0227 19:16:02.488450 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29536996-pqzsw" podStartSLOduration=1.486860187 podStartE2EDuration="2.488432445s" podCreationTimestamp="2026-02-27 19:16:00 +0000 UTC" firstStartedPulling="2026-02-27 19:16:00.940442991 +0000 UTC m=+8559.456240578" lastFinishedPulling="2026-02-27 19:16:01.942015209 +0000 UTC m=+8560.457812836" observedRunningTime="2026-02-27 19:16:02.485788781 +0000 UTC m=+8561.001586378" watchObservedRunningTime="2026-02-27 19:16:02.488432445 +0000 UTC m=+8561.004230042"
Feb 27 19:16:03 crc kubenswrapper[4708]: I0227 19:16:03.480357 4708 generic.go:334] "Generic (PLEG): container finished" podID="5481f469-116a-4cf1-a9a5-396010496da0" containerID="980a3ac0984d114aeeebdaca046b073f7c924af88f8eeedfb7d8bd22f0df0b4f" exitCode=0
Feb 27 19:16:03 crc kubenswrapper[4708]: I0227 19:16:03.480425 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536996-pqzsw" event={"ID":"5481f469-116a-4cf1-a9a5-396010496da0","Type":"ContainerDied","Data":"980a3ac0984d114aeeebdaca046b073f7c924af88f8eeedfb7d8bd22f0df0b4f"}
Feb 27 19:16:04 crc kubenswrapper[4708]: I0227 19:16:04.909928 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536996-pqzsw"
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536996-pqzsw" Feb 27 19:16:04 crc kubenswrapper[4708]: I0227 19:16:04.950927 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpr5k\" (UniqueName: \"kubernetes.io/projected/5481f469-116a-4cf1-a9a5-396010496da0-kube-api-access-rpr5k\") pod \"5481f469-116a-4cf1-a9a5-396010496da0\" (UID: \"5481f469-116a-4cf1-a9a5-396010496da0\") " Feb 27 19:16:04 crc kubenswrapper[4708]: I0227 19:16:04.957769 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5481f469-116a-4cf1-a9a5-396010496da0-kube-api-access-rpr5k" (OuterVolumeSpecName: "kube-api-access-rpr5k") pod "5481f469-116a-4cf1-a9a5-396010496da0" (UID: "5481f469-116a-4cf1-a9a5-396010496da0"). InnerVolumeSpecName "kube-api-access-rpr5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:16:05 crc kubenswrapper[4708]: I0227 19:16:05.054096 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpr5k\" (UniqueName: \"kubernetes.io/projected/5481f469-116a-4cf1-a9a5-396010496da0-kube-api-access-rpr5k\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:05 crc kubenswrapper[4708]: I0227 19:16:05.343871 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536990-6kktn"] Feb 27 19:16:05 crc kubenswrapper[4708]: I0227 19:16:05.355080 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536990-6kktn"] Feb 27 19:16:05 crc kubenswrapper[4708]: I0227 19:16:05.509909 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536996-pqzsw" event={"ID":"5481f469-116a-4cf1-a9a5-396010496da0","Type":"ContainerDied","Data":"54ed841e63c734fd589cab429953e1022ed58c2e4f9ee830a51cd14cf78544e6"} Feb 27 19:16:05 crc kubenswrapper[4708]: I0227 19:16:05.509947 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54ed841e63c734fd589cab429953e1022ed58c2e4f9ee830a51cd14cf78544e6" Feb 27 19:16:05 crc kubenswrapper[4708]: I0227 19:16:05.509952 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536996-pqzsw" Feb 27 19:16:06 crc kubenswrapper[4708]: I0227 19:16:06.242056 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d" path="/var/lib/kubelet/pods/a6d1ed9b-7087-4a6c-905c-cf9e4cd0826d/volumes" Feb 27 19:16:09 crc kubenswrapper[4708]: I0227 19:16:09.556179 4708 scope.go:117] "RemoveContainer" containerID="63f276904218f3e28c768cbb7156974463b3e6543d2a06b166f62853508fac5c" Feb 27 19:16:40 crc kubenswrapper[4708]: I0227 19:16:40.905629 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2hktm"] Feb 27 19:16:40 crc kubenswrapper[4708]: E0227 19:16:40.914374 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5481f469-116a-4cf1-a9a5-396010496da0" containerName="oc" Feb 27 19:16:40 crc kubenswrapper[4708]: I0227 19:16:40.914726 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5481f469-116a-4cf1-a9a5-396010496da0" containerName="oc" Feb 27 19:16:40 crc kubenswrapper[4708]: I0227 19:16:40.915141 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="5481f469-116a-4cf1-a9a5-396010496da0" containerName="oc" Feb 27 19:16:40 crc kubenswrapper[4708]: I0227 19:16:40.917112 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:40 crc kubenswrapper[4708]: I0227 19:16:40.941972 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hktm"] Feb 27 19:16:41 crc kubenswrapper[4708]: I0227 19:16:41.057431 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125c8347-57c7-4097-9138-a84306cc21ef-catalog-content\") pod \"redhat-marketplace-2hktm\" (UID: \"125c8347-57c7-4097-9138-a84306cc21ef\") " pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:41 crc kubenswrapper[4708]: I0227 19:16:41.057910 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125c8347-57c7-4097-9138-a84306cc21ef-utilities\") pod \"redhat-marketplace-2hktm\" (UID: \"125c8347-57c7-4097-9138-a84306cc21ef\") " pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:41 crc kubenswrapper[4708]: I0227 19:16:41.058024 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-554zf\" (UniqueName: \"kubernetes.io/projected/125c8347-57c7-4097-9138-a84306cc21ef-kube-api-access-554zf\") pod \"redhat-marketplace-2hktm\" (UID: \"125c8347-57c7-4097-9138-a84306cc21ef\") " pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:41 crc kubenswrapper[4708]: I0227 19:16:41.159637 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125c8347-57c7-4097-9138-a84306cc21ef-utilities\") pod \"redhat-marketplace-2hktm\" (UID: \"125c8347-57c7-4097-9138-a84306cc21ef\") " pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:41 crc kubenswrapper[4708]: I0227 19:16:41.159700 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-554zf\" (UniqueName: \"kubernetes.io/projected/125c8347-57c7-4097-9138-a84306cc21ef-kube-api-access-554zf\") pod \"redhat-marketplace-2hktm\" (UID: \"125c8347-57c7-4097-9138-a84306cc21ef\") " pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:41 crc kubenswrapper[4708]: I0227 19:16:41.159801 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125c8347-57c7-4097-9138-a84306cc21ef-catalog-content\") pod \"redhat-marketplace-2hktm\" (UID: \"125c8347-57c7-4097-9138-a84306cc21ef\") " pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:41 crc kubenswrapper[4708]: I0227 19:16:41.160266 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125c8347-57c7-4097-9138-a84306cc21ef-utilities\") pod \"redhat-marketplace-2hktm\" (UID: \"125c8347-57c7-4097-9138-a84306cc21ef\") " pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:41 crc kubenswrapper[4708]: I0227 19:16:41.160299 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125c8347-57c7-4097-9138-a84306cc21ef-catalog-content\") pod \"redhat-marketplace-2hktm\" (UID: \"125c8347-57c7-4097-9138-a84306cc21ef\") " pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:41 crc kubenswrapper[4708]: I0227 19:16:41.182631 4708 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-554zf\" (UniqueName: \"kubernetes.io/projected/125c8347-57c7-4097-9138-a84306cc21ef-kube-api-access-554zf\") pod \"redhat-marketplace-2hktm\" (UID: \"125c8347-57c7-4097-9138-a84306cc21ef\") " pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:41 crc kubenswrapper[4708]: I0227 19:16:41.254781 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:41 crc kubenswrapper[4708]: I0227 19:16:41.767740 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hktm"] Feb 27 19:16:41 crc kubenswrapper[4708]: I0227 19:16:41.940207 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hktm" event={"ID":"125c8347-57c7-4097-9138-a84306cc21ef","Type":"ContainerStarted","Data":"594c8b3f06c4f9a7aaf8e3e34cf75d4f3bc80943ae4393f858fad64897e96b42"} Feb 27 19:16:42 crc kubenswrapper[4708]: I0227 19:16:42.951280 4708 generic.go:334] "Generic (PLEG): container finished" podID="125c8347-57c7-4097-9138-a84306cc21ef" containerID="de3bd78197923667e642ab0a3725e6276b5f2d5ffb826e3682770d275a65f92b" exitCode=0 Feb 27 19:16:42 crc kubenswrapper[4708]: I0227 19:16:42.951332 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hktm" event={"ID":"125c8347-57c7-4097-9138-a84306cc21ef","Type":"ContainerDied","Data":"de3bd78197923667e642ab0a3725e6276b5f2d5ffb826e3682770d275a65f92b"} Feb 27 19:16:44 crc kubenswrapper[4708]: I0227 19:16:44.997320 4708 generic.go:334] "Generic (PLEG): container finished" podID="125c8347-57c7-4097-9138-a84306cc21ef" containerID="48815fe53500b34775f3d13a58e9220b8c9a3cdc760b82f66155a3faa2733cc6" exitCode=0 Feb 27 19:16:44 crc kubenswrapper[4708]: I0227 19:16:44.997432 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hktm" event={"ID":"125c8347-57c7-4097-9138-a84306cc21ef","Type":"ContainerDied","Data":"48815fe53500b34775f3d13a58e9220b8c9a3cdc760b82f66155a3faa2733cc6"} Feb 27 19:16:46 crc kubenswrapper[4708]: I0227 19:16:46.008659 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hktm" event={"ID":"125c8347-57c7-4097-9138-a84306cc21ef","Type":"ContainerStarted","Data":"798a382e44b3f05420142860c01ffe7f6610b66c1f1d6ea544f345e27b16b1df"} Feb 27 19:16:46 crc kubenswrapper[4708]: I0227 19:16:46.026612 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2hktm" podStartSLOduration=3.56943304 podStartE2EDuration="6.026592657s" podCreationTimestamp="2026-02-27 19:16:40 +0000 UTC" firstStartedPulling="2026-02-27 19:16:42.953554275 +0000 UTC m=+8601.469351862" lastFinishedPulling="2026-02-27 19:16:45.410713882 +0000 UTC m=+8603.926511479" observedRunningTime="2026-02-27 19:16:46.025251699 +0000 UTC m=+8604.541049296" watchObservedRunningTime="2026-02-27 19:16:46.026592657 +0000 UTC m=+8604.542390244" Feb 27 19:16:51 crc kubenswrapper[4708]: I0227 19:16:51.255763 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:51 crc kubenswrapper[4708]: I0227 19:16:51.256113 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:51 crc kubenswrapper[4708]: I0227 19:16:51.304970 4708 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:52 crc kubenswrapper[4708]: I0227 19:16:52.124652 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:52 crc kubenswrapper[4708]: I0227 19:16:52.174568 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hktm"] Feb 27 19:16:54 crc kubenswrapper[4708]: I0227 19:16:54.089689 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2hktm" podUID="125c8347-57c7-4097-9138-a84306cc21ef" containerName="registry-server" containerID="cri-o://798a382e44b3f05420142860c01ffe7f6610b66c1f1d6ea544f345e27b16b1df" gracePeriod=2 Feb 27 19:16:54 crc kubenswrapper[4708]: I0227 19:16:54.668276 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:54 crc kubenswrapper[4708]: I0227 19:16:54.743200 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125c8347-57c7-4097-9138-a84306cc21ef-catalog-content\") pod \"125c8347-57c7-4097-9138-a84306cc21ef\" (UID: \"125c8347-57c7-4097-9138-a84306cc21ef\") " Feb 27 19:16:54 crc kubenswrapper[4708]: I0227 19:16:54.743429 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125c8347-57c7-4097-9138-a84306cc21ef-utilities\") pod \"125c8347-57c7-4097-9138-a84306cc21ef\" (UID: \"125c8347-57c7-4097-9138-a84306cc21ef\") " Feb 27 19:16:54 crc kubenswrapper[4708]: I0227 19:16:54.743468 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-554zf\" (UniqueName: \"kubernetes.io/projected/125c8347-57c7-4097-9138-a84306cc21ef-kube-api-access-554zf\") pod \"125c8347-57c7-4097-9138-a84306cc21ef\" (UID: \"125c8347-57c7-4097-9138-a84306cc21ef\") " Feb 27 19:16:54 crc kubenswrapper[4708]: I0227 19:16:54.744175 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/125c8347-57c7-4097-9138-a84306cc21ef-utilities" (OuterVolumeSpecName: "utilities") pod "125c8347-57c7-4097-9138-a84306cc21ef" (UID: "125c8347-57c7-4097-9138-a84306cc21ef"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:16:54 crc kubenswrapper[4708]: I0227 19:16:54.748425 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/125c8347-57c7-4097-9138-a84306cc21ef-kube-api-access-554zf" (OuterVolumeSpecName: "kube-api-access-554zf") pod "125c8347-57c7-4097-9138-a84306cc21ef" (UID: "125c8347-57c7-4097-9138-a84306cc21ef"). InnerVolumeSpecName "kube-api-access-554zf". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:16:54 crc kubenswrapper[4708]: I0227 19:16:54.768643 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/125c8347-57c7-4097-9138-a84306cc21ef-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "125c8347-57c7-4097-9138-a84306cc21ef" (UID: "125c8347-57c7-4097-9138-a84306cc21ef"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:16:54 crc kubenswrapper[4708]: I0227 19:16:54.845975 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/125c8347-57c7-4097-9138-a84306cc21ef-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:54 crc kubenswrapper[4708]: I0227 19:16:54.846003 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/125c8347-57c7-4097-9138-a84306cc21ef-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:54 crc kubenswrapper[4708]: I0227 19:16:54.846027 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-554zf\" (UniqueName: \"kubernetes.io/projected/125c8347-57c7-4097-9138-a84306cc21ef-kube-api-access-554zf\") on node \"crc\" DevicePath \"\"" Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.105931 4708 generic.go:334] "Generic (PLEG): container finished" podID="125c8347-57c7-4097-9138-a84306cc21ef" containerID="798a382e44b3f05420142860c01ffe7f6610b66c1f1d6ea544f345e27b16b1df" exitCode=0 Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.105990 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hktm" event={"ID":"125c8347-57c7-4097-9138-a84306cc21ef","Type":"ContainerDied","Data":"798a382e44b3f05420142860c01ffe7f6610b66c1f1d6ea544f345e27b16b1df"} Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.106029 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2hktm" event={"ID":"125c8347-57c7-4097-9138-a84306cc21ef","Type":"ContainerDied","Data":"594c8b3f06c4f9a7aaf8e3e34cf75d4f3bc80943ae4393f858fad64897e96b42"} Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.106032 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2hktm" Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.106046 4708 scope.go:117] "RemoveContainer" containerID="798a382e44b3f05420142860c01ffe7f6610b66c1f1d6ea544f345e27b16b1df" Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.138778 4708 scope.go:117] "RemoveContainer" containerID="48815fe53500b34775f3d13a58e9220b8c9a3cdc760b82f66155a3faa2733cc6" Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.166607 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hktm"] Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.173010 4708 scope.go:117] "RemoveContainer" containerID="de3bd78197923667e642ab0a3725e6276b5f2d5ffb826e3682770d275a65f92b" Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.177803 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2hktm"] Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.215309 4708 scope.go:117] "RemoveContainer" containerID="798a382e44b3f05420142860c01ffe7f6610b66c1f1d6ea544f345e27b16b1df" Feb 27 19:16:55 crc kubenswrapper[4708]: E0227 19:16:55.215728 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"798a382e44b3f05420142860c01ffe7f6610b66c1f1d6ea544f345e27b16b1df\": container with ID starting with 798a382e44b3f05420142860c01ffe7f6610b66c1f1d6ea544f345e27b16b1df not found: ID does not exist" containerID="798a382e44b3f05420142860c01ffe7f6610b66c1f1d6ea544f345e27b16b1df" Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.215763 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"798a382e44b3f05420142860c01ffe7f6610b66c1f1d6ea544f345e27b16b1df"} err="failed to get container status \"798a382e44b3f05420142860c01ffe7f6610b66c1f1d6ea544f345e27b16b1df\": rpc error: code = NotFound desc = could not find container \"798a382e44b3f05420142860c01ffe7f6610b66c1f1d6ea544f345e27b16b1df\": container with ID starting with 798a382e44b3f05420142860c01ffe7f6610b66c1f1d6ea544f345e27b16b1df not found: ID does not exist" Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.215788 4708 scope.go:117] "RemoveContainer" containerID="48815fe53500b34775f3d13a58e9220b8c9a3cdc760b82f66155a3faa2733cc6" Feb 27 19:16:55 crc kubenswrapper[4708]: E0227 19:16:55.216231 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48815fe53500b34775f3d13a58e9220b8c9a3cdc760b82f66155a3faa2733cc6\": container with ID starting with 48815fe53500b34775f3d13a58e9220b8c9a3cdc760b82f66155a3faa2733cc6 not found: ID does not exist" containerID="48815fe53500b34775f3d13a58e9220b8c9a3cdc760b82f66155a3faa2733cc6" Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.216252 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48815fe53500b34775f3d13a58e9220b8c9a3cdc760b82f66155a3faa2733cc6"} err="failed to get container status \"48815fe53500b34775f3d13a58e9220b8c9a3cdc760b82f66155a3faa2733cc6\": rpc error: code = NotFound desc = could not find container \"48815fe53500b34775f3d13a58e9220b8c9a3cdc760b82f66155a3faa2733cc6\": container with ID starting with 48815fe53500b34775f3d13a58e9220b8c9a3cdc760b82f66155a3faa2733cc6 not found: ID does not exist" Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.216288 4708 scope.go:117] "RemoveContainer" 
containerID="de3bd78197923667e642ab0a3725e6276b5f2d5ffb826e3682770d275a65f92b" Feb 27 19:16:55 crc kubenswrapper[4708]: E0227 19:16:55.216593 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de3bd78197923667e642ab0a3725e6276b5f2d5ffb826e3682770d275a65f92b\": container with ID starting with de3bd78197923667e642ab0a3725e6276b5f2d5ffb826e3682770d275a65f92b not found: ID does not exist" containerID="de3bd78197923667e642ab0a3725e6276b5f2d5ffb826e3682770d275a65f92b" Feb 27 19:16:55 crc kubenswrapper[4708]: I0227 19:16:55.216646 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de3bd78197923667e642ab0a3725e6276b5f2d5ffb826e3682770d275a65f92b"} err="failed to get container status \"de3bd78197923667e642ab0a3725e6276b5f2d5ffb826e3682770d275a65f92b\": rpc error: code = NotFound desc = could not find container \"de3bd78197923667e642ab0a3725e6276b5f2d5ffb826e3682770d275a65f92b\": container with ID starting with de3bd78197923667e642ab0a3725e6276b5f2d5ffb826e3682770d275a65f92b not found: ID does not exist" Feb 27 19:16:56 crc kubenswrapper[4708]: I0227 19:16:56.243780 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="125c8347-57c7-4097-9138-a84306cc21ef" path="/var/lib/kubelet/pods/125c8347-57c7-4097-9138-a84306cc21ef/volumes" Feb 27 19:17:35 crc kubenswrapper[4708]: I0227 19:17:35.631999 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:17:35 crc kubenswrapper[4708]: I0227 19:17:35.632599 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:18:00 crc kubenswrapper[4708]: I0227 19:18:00.149719 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29536998-xrq8h"] Feb 27 19:18:00 crc kubenswrapper[4708]: E0227 19:18:00.150589 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125c8347-57c7-4097-9138-a84306cc21ef" containerName="registry-server" Feb 27 19:18:00 crc kubenswrapper[4708]: I0227 19:18:00.150602 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="125c8347-57c7-4097-9138-a84306cc21ef" containerName="registry-server" Feb 27 19:18:00 crc kubenswrapper[4708]: E0227 19:18:00.150624 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125c8347-57c7-4097-9138-a84306cc21ef" containerName="extract-content" Feb 27 19:18:00 crc kubenswrapper[4708]: I0227 19:18:00.150630 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="125c8347-57c7-4097-9138-a84306cc21ef" containerName="extract-content" Feb 27 19:18:00 crc kubenswrapper[4708]: E0227 19:18:00.150661 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="125c8347-57c7-4097-9138-a84306cc21ef" containerName="extract-utilities" Feb 27 19:18:00 crc kubenswrapper[4708]: I0227 19:18:00.150668 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="125c8347-57c7-4097-9138-a84306cc21ef" containerName="extract-utilities" Feb 27 19:18:00 crc kubenswrapper[4708]: I0227 
Feb 27 19:18:00 crc kubenswrapper[4708]: I0227 19:18:00.151608 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536998-xrq8h"
Feb 27 19:18:00 crc kubenswrapper[4708]: I0227 19:18:00.154302 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5"
Feb 27 19:18:00 crc kubenswrapper[4708]: I0227 19:18:00.154344 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt"
Feb 27 19:18:00 crc kubenswrapper[4708]: I0227 19:18:00.154402 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt"
Feb 27 19:18:00 crc kubenswrapper[4708]: I0227 19:18:00.162743 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536998-xrq8h"]
Feb 27 19:18:00 crc kubenswrapper[4708]: I0227 19:18:00.296169 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llxcn\" (UniqueName: \"kubernetes.io/projected/bb142a2f-83db-4bcd-bee8-82ee83b2bd0e-kube-api-access-llxcn\") pod \"auto-csr-approver-29536998-xrq8h\" (UID: \"bb142a2f-83db-4bcd-bee8-82ee83b2bd0e\") " pod="openshift-infra/auto-csr-approver-29536998-xrq8h"
Feb 27 19:18:00 crc kubenswrapper[4708]: I0227 19:18:00.399198 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llxcn\" (UniqueName: \"kubernetes.io/projected/bb142a2f-83db-4bcd-bee8-82ee83b2bd0e-kube-api-access-llxcn\") pod \"auto-csr-approver-29536998-xrq8h\" (UID: \"bb142a2f-83db-4bcd-bee8-82ee83b2bd0e\") " pod="openshift-infra/auto-csr-approver-29536998-xrq8h"
Feb 27 19:18:00 crc kubenswrapper[4708]: I0227 19:18:00.424538 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llxcn\" (UniqueName: \"kubernetes.io/projected/bb142a2f-83db-4bcd-bee8-82ee83b2bd0e-kube-api-access-llxcn\") pod \"auto-csr-approver-29536998-xrq8h\" (UID: \"bb142a2f-83db-4bcd-bee8-82ee83b2bd0e\") " pod="openshift-infra/auto-csr-approver-29536998-xrq8h"
Feb 27 19:18:00 crc kubenswrapper[4708]: I0227 19:18:00.506011 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536998-xrq8h"
Feb 27 19:18:01 crc kubenswrapper[4708]: I0227 19:18:01.092539 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29536998-xrq8h"]
Feb 27 19:18:01 crc kubenswrapper[4708]: I0227 19:18:01.856893 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536998-xrq8h" event={"ID":"bb142a2f-83db-4bcd-bee8-82ee83b2bd0e","Type":"ContainerStarted","Data":"3fcb2f2330f07537c6c6d8e320e9b9404822415711aeead38e044847636dda7b"}
Feb 27 19:18:02 crc kubenswrapper[4708]: I0227 19:18:02.868031 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536998-xrq8h" event={"ID":"bb142a2f-83db-4bcd-bee8-82ee83b2bd0e","Type":"ContainerStarted","Data":"adec1cfc3397747ef542a1e57a87ed8e9898b1f3ced9cffa5369f11dab6fd851"}
Feb 27 19:18:03 crc kubenswrapper[4708]: I0227 19:18:03.885310 4708 generic.go:334] "Generic (PLEG): container finished" podID="bb142a2f-83db-4bcd-bee8-82ee83b2bd0e" containerID="adec1cfc3397747ef542a1e57a87ed8e9898b1f3ced9cffa5369f11dab6fd851" exitCode=0
Feb 27 19:18:03 crc kubenswrapper[4708]: I0227 19:18:03.885355 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536998-xrq8h" event={"ID":"bb142a2f-83db-4bcd-bee8-82ee83b2bd0e","Type":"ContainerDied","Data":"adec1cfc3397747ef542a1e57a87ed8e9898b1f3ced9cffa5369f11dab6fd851"}
Feb 27 19:18:04 crc kubenswrapper[4708]: I0227 19:18:04.390688 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536998-xrq8h"
Feb 27 19:18:04 crc kubenswrapper[4708]: I0227 19:18:04.397352 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llxcn\" (UniqueName: \"kubernetes.io/projected/bb142a2f-83db-4bcd-bee8-82ee83b2bd0e-kube-api-access-llxcn\") pod \"bb142a2f-83db-4bcd-bee8-82ee83b2bd0e\" (UID: \"bb142a2f-83db-4bcd-bee8-82ee83b2bd0e\") "
Feb 27 19:18:04 crc kubenswrapper[4708]: I0227 19:18:04.403951 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb142a2f-83db-4bcd-bee8-82ee83b2bd0e-kube-api-access-llxcn" (OuterVolumeSpecName: "kube-api-access-llxcn") pod "bb142a2f-83db-4bcd-bee8-82ee83b2bd0e" (UID: "bb142a2f-83db-4bcd-bee8-82ee83b2bd0e"). InnerVolumeSpecName "kube-api-access-llxcn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:18:04 crc kubenswrapper[4708]: I0227 19:18:04.500834 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llxcn\" (UniqueName: \"kubernetes.io/projected/bb142a2f-83db-4bcd-bee8-82ee83b2bd0e-kube-api-access-llxcn\") on node \"crc\" DevicePath \"\""
Feb 27 19:18:04 crc kubenswrapper[4708]: I0227 19:18:04.899059 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29536998-xrq8h" event={"ID":"bb142a2f-83db-4bcd-bee8-82ee83b2bd0e","Type":"ContainerDied","Data":"3fcb2f2330f07537c6c6d8e320e9b9404822415711aeead38e044847636dda7b"}
Feb 27 19:18:04 crc kubenswrapper[4708]: I0227 19:18:04.899104 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3fcb2f2330f07537c6c6d8e320e9b9404822415711aeead38e044847636dda7b"
Feb 27 19:18:04 crc kubenswrapper[4708]: I0227 19:18:04.899170 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29536998-xrq8h"
Need to start a new one" pod="openshift-infra/auto-csr-approver-29536998-xrq8h" Feb 27 19:18:05 crc kubenswrapper[4708]: I0227 19:18:05.481517 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536992-nlhjs"] Feb 27 19:18:05 crc kubenswrapper[4708]: I0227 19:18:05.491556 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536992-nlhjs"] Feb 27 19:18:05 crc kubenswrapper[4708]: I0227 19:18:05.631432 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:18:05 crc kubenswrapper[4708]: I0227 19:18:05.631501 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:18:06 crc kubenswrapper[4708]: I0227 19:18:06.244065 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7fbac4f-74c3-423a-9466-731b20defbb5" path="/var/lib/kubelet/pods/e7fbac4f-74c3-423a-9466-731b20defbb5/volumes" Feb 27 19:18:09 crc kubenswrapper[4708]: I0227 19:18:09.706365 4708 scope.go:117] "RemoveContainer" containerID="883cb2348333a0e4ca895fef9d863464623e98338ecef04fba320b32eb4c4e1d" Feb 27 19:18:35 crc kubenswrapper[4708]: I0227 19:18:35.632028 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:18:35 crc kubenswrapper[4708]: I0227 19:18:35.633235 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:18:35 crc kubenswrapper[4708]: I0227 19:18:35.633288 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 19:18:35 crc kubenswrapper[4708]: I0227 19:18:35.634452 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:18:35 crc kubenswrapper[4708]: I0227 19:18:35.634508 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" gracePeriod=600 Feb 27 19:18:35 crc kubenswrapper[4708]: E0227 19:18:35.760476 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:18:36 crc kubenswrapper[4708]: I0227 19:18:36.252190 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" exitCode=0 Feb 27 19:18:36 crc kubenswrapper[4708]: I0227 19:18:36.252399 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f"} Feb 27 19:18:36 crc kubenswrapper[4708]: I0227 19:18:36.252514 4708 scope.go:117] "RemoveContainer" containerID="79dfd594e3849e72676ede36539df4697e5b55e75e1d7950e08908821cac878c" Feb 27 19:18:36 crc kubenswrapper[4708]: I0227 19:18:36.253378 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:18:36 crc kubenswrapper[4708]: E0227 19:18:36.253608 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:18:48 crc kubenswrapper[4708]: I0227 19:18:48.229505 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:18:48 crc kubenswrapper[4708]: E0227 19:18:48.230381 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:19:00 crc kubenswrapper[4708]: I0227 19:19:00.228620 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:19:00 crc kubenswrapper[4708]: E0227 19:19:00.229550 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:19:15 crc kubenswrapper[4708]: I0227 19:19:15.230231 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:19:15 crc kubenswrapper[4708]: E0227 19:19:15.231329 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:19:26 crc kubenswrapper[4708]: I0227 19:19:26.232305 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:19:26 crc kubenswrapper[4708]: E0227 19:19:26.233408 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:19:37 crc kubenswrapper[4708]: I0227 19:19:37.229407 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:19:37 crc kubenswrapper[4708]: E0227 19:19:37.230593 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:19:49 crc kubenswrapper[4708]: I0227 19:19:49.229520 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:19:49 crc kubenswrapper[4708]: E0227 19:19:49.230313 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:20:00 crc kubenswrapper[4708]: I0227 19:20:00.161036 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537000-fr8hb"] Feb 27 19:20:00 crc kubenswrapper[4708]: E0227 19:20:00.162792 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb142a2f-83db-4bcd-bee8-82ee83b2bd0e" containerName="oc" Feb 27 19:20:00 crc kubenswrapper[4708]: I0227 19:20:00.162891 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb142a2f-83db-4bcd-bee8-82ee83b2bd0e" containerName="oc" Feb 27 19:20:00 crc kubenswrapper[4708]: I0227 19:20:00.163153 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb142a2f-83db-4bcd-bee8-82ee83b2bd0e" containerName="oc"
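The auto-csr-approver pods recur every two minutes because they are CronJob runs, and the numeric suffix in each Job name (29536998, 29537000, 29537002) is the run's scheduled time expressed in minutes since the Unix epoch, which is how the CronJob controller names the Jobs it creates. A quick decode, with the suffix as the only value taken from the "SyncLoop ADD" record above:

    # Decode a CronJob Job-name suffix: scheduled Unix time in minutes.
    from datetime import datetime, timezone

    suffix = 29537000  # from auto-csr-approver-29537000-fr8hb above
    scheduled = datetime.fromtimestamp(suffix * 60, tz=timezone.utc)
    print(scheduled.isoformat())  # 2026-02-27T19:20:00+00:00, matching these records

The RemoveStaleState and "Deleted CPUSet assignment" records above are routine bookkeeping: before admitting the new run, the kubelet's CPU and memory managers drop state left behind by the previous run's "oc" container.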
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537000-fr8hb" Feb 27 19:20:00 crc kubenswrapper[4708]: I0227 19:20:00.167967 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:20:00 crc kubenswrapper[4708]: I0227 19:20:00.168059 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:20:00 crc kubenswrapper[4708]: I0227 19:20:00.168084 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:20:00 crc kubenswrapper[4708]: I0227 19:20:00.175343 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537000-fr8hb"] Feb 27 19:20:00 crc kubenswrapper[4708]: I0227 19:20:00.202760 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4xz4\" (UniqueName: \"kubernetes.io/projected/d8606046-7a55-4729-b9fd-ced63bd8685e-kube-api-access-t4xz4\") pod \"auto-csr-approver-29537000-fr8hb\" (UID: \"d8606046-7a55-4729-b9fd-ced63bd8685e\") " pod="openshift-infra/auto-csr-approver-29537000-fr8hb" Feb 27 19:20:00 crc kubenswrapper[4708]: I0227 19:20:00.234954 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:20:00 crc kubenswrapper[4708]: E0227 19:20:00.235591 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:20:00 crc kubenswrapper[4708]: I0227 19:20:00.304804 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t4xz4\" (UniqueName: \"kubernetes.io/projected/d8606046-7a55-4729-b9fd-ced63bd8685e-kube-api-access-t4xz4\") pod \"auto-csr-approver-29537000-fr8hb\" (UID: \"d8606046-7a55-4729-b9fd-ced63bd8685e\") " pod="openshift-infra/auto-csr-approver-29537000-fr8hb" Feb 27 19:20:00 crc kubenswrapper[4708]: I0227 19:20:00.329336 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t4xz4\" (UniqueName: \"kubernetes.io/projected/d8606046-7a55-4729-b9fd-ced63bd8685e-kube-api-access-t4xz4\") pod \"auto-csr-approver-29537000-fr8hb\" (UID: \"d8606046-7a55-4729-b9fd-ced63bd8685e\") " pod="openshift-infra/auto-csr-approver-29537000-fr8hb" Feb 27 19:20:00 crc kubenswrapper[4708]: I0227 19:20:00.501259 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537000-fr8hb" Feb 27 19:20:01 crc kubenswrapper[4708]: I0227 19:20:01.076824 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:20:01 crc kubenswrapper[4708]: I0227 19:20:01.087098 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537000-fr8hb"] Feb 27 19:20:01 crc kubenswrapper[4708]: I0227 19:20:01.209785 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537000-fr8hb" event={"ID":"d8606046-7a55-4729-b9fd-ced63bd8685e","Type":"ContainerStarted","Data":"7c251dcfca1448f281ee36e25a37752a528ca227a3f6ca85106571e5bb07e43d"} Feb 27 19:20:03 crc kubenswrapper[4708]: E0227 19:20:03.004110 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8606046_7a55_4729_b9fd_ced63bd8685e.slice/crio-conmon-6db6c01b3373755b81683dbf9ad85c7930a954c9edef0ac07ac3e458762c0c74.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd8606046_7a55_4729_b9fd_ced63bd8685e.slice/crio-6db6c01b3373755b81683dbf9ad85c7930a954c9edef0ac07ac3e458762c0c74.scope\": RecentStats: unable to find data in memory cache]" Feb 27 19:20:03 crc kubenswrapper[4708]: I0227 19:20:03.230425 4708 generic.go:334] "Generic (PLEG): container finished" podID="d8606046-7a55-4729-b9fd-ced63bd8685e" containerID="6db6c01b3373755b81683dbf9ad85c7930a954c9edef0ac07ac3e458762c0c74" exitCode=0 Feb 27 19:20:03 crc kubenswrapper[4708]: I0227 19:20:03.230486 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537000-fr8hb" event={"ID":"d8606046-7a55-4729-b9fd-ced63bd8685e","Type":"ContainerDied","Data":"6db6c01b3373755b81683dbf9ad85c7930a954c9edef0ac07ac3e458762c0c74"} Feb 27 19:20:04 crc kubenswrapper[4708]: I0227 19:20:04.767198 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537000-fr8hb" Feb 27 19:20:04 crc kubenswrapper[4708]: I0227 19:20:04.809763 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t4xz4\" (UniqueName: \"kubernetes.io/projected/d8606046-7a55-4729-b9fd-ced63bd8685e-kube-api-access-t4xz4\") pod \"d8606046-7a55-4729-b9fd-ced63bd8685e\" (UID: \"d8606046-7a55-4729-b9fd-ced63bd8685e\") " Feb 27 19:20:04 crc kubenswrapper[4708]: I0227 19:20:04.822410 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8606046-7a55-4729-b9fd-ced63bd8685e-kube-api-access-t4xz4" (OuterVolumeSpecName: "kube-api-access-t4xz4") pod "d8606046-7a55-4729-b9fd-ced63bd8685e" (UID: "d8606046-7a55-4729-b9fd-ced63bd8685e"). InnerVolumeSpecName "kube-api-access-t4xz4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:20:04 crc kubenswrapper[4708]: I0227 19:20:04.912100 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t4xz4\" (UniqueName: \"kubernetes.io/projected/d8606046-7a55-4729-b9fd-ced63bd8685e-kube-api-access-t4xz4\") on node \"crc\" DevicePath \"\"" Feb 27 19:20:05 crc kubenswrapper[4708]: I0227 19:20:05.252426 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537000-fr8hb" event={"ID":"d8606046-7a55-4729-b9fd-ced63bd8685e","Type":"ContainerDied","Data":"7c251dcfca1448f281ee36e25a37752a528ca227a3f6ca85106571e5bb07e43d"} Feb 27 19:20:05 crc kubenswrapper[4708]: I0227 19:20:05.252463 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c251dcfca1448f281ee36e25a37752a528ca227a3f6ca85106571e5bb07e43d" Feb 27 19:20:05 crc kubenswrapper[4708]: I0227 19:20:05.252480 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537000-fr8hb" Feb 27 19:20:05 crc kubenswrapper[4708]: I0227 19:20:05.874819 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536994-8p9nr"] Feb 27 19:20:05 crc kubenswrapper[4708]: I0227 19:20:05.886534 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536994-8p9nr"] Feb 27 19:20:06 crc kubenswrapper[4708]: I0227 19:20:06.243446 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc148c18-3668-442e-857c-11ffe9cb0b1c" path="/var/lib/kubelet/pods/fc148c18-3668-442e-857c-11ffe9cb0b1c/volumes" Feb 27 19:20:09 crc kubenswrapper[4708]: I0227 19:20:09.817968 4708 scope.go:117] "RemoveContainer" containerID="4619e1d0bdcc18ed7b05cd83c159ed98e9bc83fbe1b3423981ebe76ab1d01bcd" Feb 27 19:20:15 crc kubenswrapper[4708]: I0227 19:20:15.228962 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:20:15 crc kubenswrapper[4708]: E0227 19:20:15.229719 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:20:30 crc kubenswrapper[4708]: I0227 19:20:30.229314 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:20:30 crc kubenswrapper[4708]: E0227 19:20:30.230383 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:20:45 crc kubenswrapper[4708]: I0227 19:20:45.228778 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:20:45 crc kubenswrapper[4708]: E0227 19:20:45.229458 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:21:00 crc kubenswrapper[4708]: I0227 19:21:00.228622 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:21:00 crc kubenswrapper[4708]: E0227 19:21:00.229897 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:21:14 crc kubenswrapper[4708]: I0227 19:21:14.229523 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:21:14 crc kubenswrapper[4708]: E0227 19:21:14.230336 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:21:28 crc kubenswrapper[4708]: I0227 19:21:28.229059 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:21:28 crc kubenswrapper[4708]: E0227 19:21:28.229929 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:21:41 crc kubenswrapper[4708]: I0227 19:21:41.229207 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:21:41 crc kubenswrapper[4708]: E0227 19:21:41.230123 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.133671 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Feb 27 19:21:52 crc kubenswrapper[4708]: E0227 19:21:52.137055 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8606046-7a55-4729-b9fd-ced63bd8685e" containerName="oc" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.137208 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8606046-7a55-4729-b9fd-ced63bd8685e" containerName="oc" Feb 
27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.137560 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8606046-7a55-4729-b9fd-ced63bd8685e" containerName="oc" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.138585 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.141234 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.142912 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.143077 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.143540 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-958xs" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.173312 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.259202 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a2707740-9be6-47c5-996c-43c292ad9758-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.259523 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2707740-9be6-47c5-996c-43c292ad9758-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.259671 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2707740-9be6-47c5-996c-43c292ad9758-config-data\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.259799 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.259980 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.260138 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t75ct\" (UniqueName: \"kubernetes.io/projected/a2707740-9be6-47c5-996c-43c292ad9758-kube-api-access-t75ct\") pod \"tempest-tests-tempest\" (UID: 
\"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.260310 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a2707740-9be6-47c5-996c-43c292ad9758-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.260438 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.260483 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.363008 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a2707740-9be6-47c5-996c-43c292ad9758-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.363288 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2707740-9be6-47c5-996c-43c292ad9758-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.363314 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2707740-9be6-47c5-996c-43c292ad9758-config-data\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.363410 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.363466 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.363534 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t75ct\" (UniqueName: \"kubernetes.io/projected/a2707740-9be6-47c5-996c-43c292ad9758-kube-api-access-t75ct\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " 
pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.363600 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a2707740-9be6-47c5-996c-43c292ad9758-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.363671 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.363689 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.365525 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a2707740-9be6-47c5-996c-43c292ad9758-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.366311 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2707740-9be6-47c5-996c-43c292ad9758-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.366328 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a2707740-9be6-47c5-996c-43c292ad9758-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.367602 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2707740-9be6-47c5-996c-43c292ad9758-config-data\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.368321 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.374559 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.375700 4708 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.379240 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.383586 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t75ct\" (UniqueName: \"kubernetes.io/projected/a2707740-9be6-47c5-996c-43c292ad9758-kube-api-access-t75ct\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.402040 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"tempest-tests-tempest\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " pod="openstack/tempest-tests-tempest" Feb 27 19:21:52 crc kubenswrapper[4708]: I0227 19:21:52.476670 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 27 19:21:53 crc kubenswrapper[4708]: I0227 19:21:53.135312 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Feb 27 19:21:53 crc kubenswrapper[4708]: I0227 19:21:53.658573 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a2707740-9be6-47c5-996c-43c292ad9758","Type":"ContainerStarted","Data":"a1a561dc32572ff6ac2d24747f3ff23d94717e6c85c0d9b4bc1e24f76f5ee6f9"} Feb 27 19:21:54 crc kubenswrapper[4708]: I0227 19:21:54.228626 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:21:54 crc kubenswrapper[4708]: E0227 19:21:54.229013 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:22:00 crc kubenswrapper[4708]: I0227 19:22:00.148299 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537002-7bp92"] Feb 27 19:22:00 crc kubenswrapper[4708]: I0227 19:22:00.151930 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537002-7bp92" Feb 27 19:22:00 crc kubenswrapper[4708]: I0227 19:22:00.154008 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:22:00 crc kubenswrapper[4708]: I0227 19:22:00.154052 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:22:00 crc kubenswrapper[4708]: I0227 19:22:00.155116 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:22:00 crc kubenswrapper[4708]: I0227 19:22:00.161682 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537002-7bp92"] Feb 27 19:22:00 crc kubenswrapper[4708]: I0227 19:22:00.256992 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk7tm\" (UniqueName: \"kubernetes.io/projected/04db75a1-9ca7-41c9-80f3-4152c45549ff-kube-api-access-mk7tm\") pod \"auto-csr-approver-29537002-7bp92\" (UID: \"04db75a1-9ca7-41c9-80f3-4152c45549ff\") " pod="openshift-infra/auto-csr-approver-29537002-7bp92" Feb 27 19:22:00 crc kubenswrapper[4708]: I0227 19:22:00.360458 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk7tm\" (UniqueName: \"kubernetes.io/projected/04db75a1-9ca7-41c9-80f3-4152c45549ff-kube-api-access-mk7tm\") pod \"auto-csr-approver-29537002-7bp92\" (UID: \"04db75a1-9ca7-41c9-80f3-4152c45549ff\") " pod="openshift-infra/auto-csr-approver-29537002-7bp92" Feb 27 19:22:00 crc kubenswrapper[4708]: I0227 19:22:00.380477 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk7tm\" (UniqueName: \"kubernetes.io/projected/04db75a1-9ca7-41c9-80f3-4152c45549ff-kube-api-access-mk7tm\") pod \"auto-csr-approver-29537002-7bp92\" (UID: \"04db75a1-9ca7-41c9-80f3-4152c45549ff\") " pod="openshift-infra/auto-csr-approver-29537002-7bp92" Feb 27 19:22:00 crc kubenswrapper[4708]: I0227 19:22:00.477535 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537002-7bp92" Feb 27 19:22:01 crc kubenswrapper[4708]: I0227 19:22:01.083978 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537002-7bp92"] Feb 27 19:22:01 crc kubenswrapper[4708]: W0227 19:22:01.112117 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod04db75a1_9ca7_41c9_80f3_4152c45549ff.slice/crio-0b50ff56ff8c76ebff0e9dc4f78bba49d3c42841cc802eb1f9acc8f9ebe481be WatchSource:0}: Error finding container 0b50ff56ff8c76ebff0e9dc4f78bba49d3c42841cc802eb1f9acc8f9ebe481be: Status 404 returned error can't find the container with id 0b50ff56ff8c76ebff0e9dc4f78bba49d3c42841cc802eb1f9acc8f9ebe481be Feb 27 19:22:01 crc kubenswrapper[4708]: I0227 19:22:01.740767 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537002-7bp92" event={"ID":"04db75a1-9ca7-41c9-80f3-4152c45549ff","Type":"ContainerStarted","Data":"0b50ff56ff8c76ebff0e9dc4f78bba49d3c42841cc802eb1f9acc8f9ebe481be"} Feb 27 19:22:02 crc kubenswrapper[4708]: I0227 19:22:02.754053 4708 generic.go:334] "Generic (PLEG): container finished" podID="04db75a1-9ca7-41c9-80f3-4152c45549ff" containerID="6d911fdc132b3dc3f738c4e0977aaedaab46bc5575e7e804eeb68ea50c7933f2" exitCode=0 Feb 27 19:22:02 crc kubenswrapper[4708]: I0227 19:22:02.754172 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537002-7bp92" event={"ID":"04db75a1-9ca7-41c9-80f3-4152c45549ff","Type":"ContainerDied","Data":"6d911fdc132b3dc3f738c4e0977aaedaab46bc5575e7e804eeb68ea50c7933f2"} Feb 27 19:22:04 crc kubenswrapper[4708]: I0227 19:22:04.234418 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537002-7bp92" Feb 27 19:22:04 crc kubenswrapper[4708]: I0227 19:22:04.351565 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk7tm\" (UniqueName: \"kubernetes.io/projected/04db75a1-9ca7-41c9-80f3-4152c45549ff-kube-api-access-mk7tm\") pod \"04db75a1-9ca7-41c9-80f3-4152c45549ff\" (UID: \"04db75a1-9ca7-41c9-80f3-4152c45549ff\") " Feb 27 19:22:04 crc kubenswrapper[4708]: I0227 19:22:04.356662 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04db75a1-9ca7-41c9-80f3-4152c45549ff-kube-api-access-mk7tm" (OuterVolumeSpecName: "kube-api-access-mk7tm") pod "04db75a1-9ca7-41c9-80f3-4152c45549ff" (UID: "04db75a1-9ca7-41c9-80f3-4152c45549ff"). InnerVolumeSpecName "kube-api-access-mk7tm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:22:04 crc kubenswrapper[4708]: I0227 19:22:04.453907 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk7tm\" (UniqueName: \"kubernetes.io/projected/04db75a1-9ca7-41c9-80f3-4152c45549ff-kube-api-access-mk7tm\") on node \"crc\" DevicePath \"\"" Feb 27 19:22:04 crc kubenswrapper[4708]: I0227 19:22:04.796346 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537002-7bp92" event={"ID":"04db75a1-9ca7-41c9-80f3-4152c45549ff","Type":"ContainerDied","Data":"0b50ff56ff8c76ebff0e9dc4f78bba49d3c42841cc802eb1f9acc8f9ebe481be"} Feb 27 19:22:04 crc kubenswrapper[4708]: I0227 19:22:04.796604 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0b50ff56ff8c76ebff0e9dc4f78bba49d3c42841cc802eb1f9acc8f9ebe481be" Feb 27 19:22:04 crc kubenswrapper[4708]: I0227 19:22:04.796389 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537002-7bp92" Feb 27 19:22:05 crc kubenswrapper[4708]: I0227 19:22:05.309171 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536996-pqzsw"] Feb 27 19:22:05 crc kubenswrapper[4708]: I0227 19:22:05.318419 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536996-pqzsw"] Feb 27 19:22:06 crc kubenswrapper[4708]: I0227 19:22:06.243405 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5481f469-116a-4cf1-a9a5-396010496da0" path="/var/lib/kubelet/pods/5481f469-116a-4cf1-a9a5-396010496da0/volumes" Feb 27 19:22:09 crc kubenswrapper[4708]: I0227 19:22:09.229225 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:22:09 crc kubenswrapper[4708]: E0227 19:22:09.229876 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:22:10 crc kubenswrapper[4708]: I0227 19:22:10.003534 4708 scope.go:117] "RemoveContainer" containerID="980a3ac0984d114aeeebdaca046b073f7c924af88f8eeedfb7d8bd22f0df0b4f" Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.646640 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-62jdz"] Feb 27 19:22:11 crc kubenswrapper[4708]: E0227 19:22:11.647176 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04db75a1-9ca7-41c9-80f3-4152c45549ff" containerName="oc" Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.647194 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="04db75a1-9ca7-41c9-80f3-4152c45549ff" containerName="oc" Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.647472 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="04db75a1-9ca7-41c9-80f3-4152c45549ff" containerName="oc" Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.649678 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.674226 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62jdz"] Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.726733 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t76b9\" (UniqueName: \"kubernetes.io/projected/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-kube-api-access-t76b9\") pod \"certified-operators-62jdz\" (UID: \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\") " pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.726808 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-utilities\") pod \"certified-operators-62jdz\" (UID: \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\") " pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.726947 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-catalog-content\") pod \"certified-operators-62jdz\" (UID: \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\") " pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.835245 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-catalog-content\") pod \"certified-operators-62jdz\" (UID: \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\") " pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.835446 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t76b9\" (UniqueName: \"kubernetes.io/projected/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-kube-api-access-t76b9\") pod \"certified-operators-62jdz\" (UID: \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\") " pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.835482 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-utilities\") pod \"certified-operators-62jdz\" (UID: \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\") " pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.836156 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-utilities\") pod \"certified-operators-62jdz\" (UID: \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\") " pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.836385 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-catalog-content\") pod \"certified-operators-62jdz\" (UID: \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\") " pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.868920 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-t76b9\" (UniqueName: \"kubernetes.io/projected/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-kube-api-access-t76b9\") pod \"certified-operators-62jdz\" (UID: \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\") " pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:11 crc kubenswrapper[4708]: I0227 19:22:11.977554 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:21 crc kubenswrapper[4708]: I0227 19:22:21.228263 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:22:21 crc kubenswrapper[4708]: E0227 19:22:21.230661 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:22:32 crc kubenswrapper[4708]: I0227 19:22:32.237485 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:22:32 crc kubenswrapper[4708]: E0227 19:22:32.238354 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:22:32 crc kubenswrapper[4708]: E0227 19:22:32.276283 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Feb 27 19:22:32 crc kubenswrapper[4708]: E0227 19:22:32.276507 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t75ct,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tempest-tests-tempest_openstack(a2707740-9be6-47c5-996c-43c292ad9758): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:22:32 crc kubenswrapper[4708]: E0227 19:22:32.277843 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" 
podUID="a2707740-9be6-47c5-996c-43c292ad9758" Feb 27 19:22:32 crc kubenswrapper[4708]: I0227 19:22:32.894661 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-62jdz"] Feb 27 19:22:32 crc kubenswrapper[4708]: W0227 19:22:32.922114 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode783f44c_f4b8_48e4_b008_5adbcf3ffa8a.slice/crio-e4abe67d3bc48de1f0b97a5d6080abb9469a1ae7efd87cbba97dec9b82a53b53 WatchSource:0}: Error finding container e4abe67d3bc48de1f0b97a5d6080abb9469a1ae7efd87cbba97dec9b82a53b53: Status 404 returned error can't find the container with id e4abe67d3bc48de1f0b97a5d6080abb9469a1ae7efd87cbba97dec9b82a53b53 Feb 27 19:22:33 crc kubenswrapper[4708]: I0227 19:22:33.117248 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62jdz" event={"ID":"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a","Type":"ContainerStarted","Data":"e4abe67d3bc48de1f0b97a5d6080abb9469a1ae7efd87cbba97dec9b82a53b53"} Feb 27 19:22:33 crc kubenswrapper[4708]: E0227 19:22:33.118446 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="a2707740-9be6-47c5-996c-43c292ad9758" Feb 27 19:22:34 crc kubenswrapper[4708]: I0227 19:22:34.127548 4708 generic.go:334] "Generic (PLEG): container finished" podID="e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" containerID="caf69d09efdd8679212428cf74ed4eff0f58c94b2f7f2a6d89f9a286e60745e4" exitCode=0 Feb 27 19:22:34 crc kubenswrapper[4708]: I0227 19:22:34.127598 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62jdz" event={"ID":"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a","Type":"ContainerDied","Data":"caf69d09efdd8679212428cf74ed4eff0f58c94b2f7f2a6d89f9a286e60745e4"} Feb 27 19:22:35 crc kubenswrapper[4708]: I0227 19:22:35.141454 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62jdz" event={"ID":"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a","Type":"ContainerStarted","Data":"00d3c899dad498c9ead5c17aa4ba1791eacc39c75d96fb1953503d36d265f15d"} Feb 27 19:22:36 crc kubenswrapper[4708]: I0227 19:22:36.155767 4708 generic.go:334] "Generic (PLEG): container finished" podID="e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" containerID="00d3c899dad498c9ead5c17aa4ba1791eacc39c75d96fb1953503d36d265f15d" exitCode=0 Feb 27 19:22:36 crc kubenswrapper[4708]: I0227 19:22:36.155832 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62jdz" event={"ID":"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a","Type":"ContainerDied","Data":"00d3c899dad498c9ead5c17aa4ba1791eacc39c75d96fb1953503d36d265f15d"} Feb 27 19:22:37 crc kubenswrapper[4708]: I0227 19:22:37.169400 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62jdz" event={"ID":"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a","Type":"ContainerStarted","Data":"fca43234eec839e47c9d0b844c913c3e592abb338d62553e7f501b6ba4256107"} Feb 27 19:22:37 crc kubenswrapper[4708]: I0227 19:22:37.193494 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-62jdz" podStartSLOduration=23.716821046 podStartE2EDuration="26.193475579s" 
podCreationTimestamp="2026-02-27 19:22:11 +0000 UTC" firstStartedPulling="2026-02-27 19:22:34.129389747 +0000 UTC m=+8952.645187334" lastFinishedPulling="2026-02-27 19:22:36.60604429 +0000 UTC m=+8955.121841867" observedRunningTime="2026-02-27 19:22:37.18963231 +0000 UTC m=+8955.705429897" watchObservedRunningTime="2026-02-27 19:22:37.193475579 +0000 UTC m=+8955.709273166" Feb 27 19:22:41 crc kubenswrapper[4708]: I0227 19:22:41.978622 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:41 crc kubenswrapper[4708]: I0227 19:22:41.979342 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:42 crc kubenswrapper[4708]: I0227 19:22:42.061507 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:42 crc kubenswrapper[4708]: I0227 19:22:42.306917 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:42 crc kubenswrapper[4708]: I0227 19:22:42.840717 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62jdz"] Feb 27 19:22:44 crc kubenswrapper[4708]: I0227 19:22:44.719576 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-62jdz" podUID="e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" containerName="registry-server" containerID="cri-o://fca43234eec839e47c9d0b844c913c3e592abb338d62553e7f501b6ba4256107" gracePeriod=2 Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.228911 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:22:45 crc kubenswrapper[4708]: E0227 19:22:45.229481 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.427061 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.604030 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t76b9\" (UniqueName: \"kubernetes.io/projected/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-kube-api-access-t76b9\") pod \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\" (UID: \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\") " Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.604237 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-catalog-content\") pod \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\" (UID: \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\") " Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.604317 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-utilities\") pod \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\" (UID: \"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a\") " Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.605940 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-utilities" (OuterVolumeSpecName: "utilities") pod "e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" (UID: "e783f44c-f4b8-48e4-b008-5adbcf3ffa8a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.609060 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-kube-api-access-t76b9" (OuterVolumeSpecName: "kube-api-access-t76b9") pod "e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" (UID: "e783f44c-f4b8-48e4-b008-5adbcf3ffa8a"). InnerVolumeSpecName "kube-api-access-t76b9". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.678646 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" (UID: "e783f44c-f4b8-48e4-b008-5adbcf3ffa8a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.706415 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.706451 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t76b9\" (UniqueName: \"kubernetes.io/projected/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-kube-api-access-t76b9\") on node \"crc\" DevicePath \"\"" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.706462 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.734300 4708 generic.go:334] "Generic (PLEG): container finished" podID="e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" containerID="fca43234eec839e47c9d0b844c913c3e592abb338d62553e7f501b6ba4256107" exitCode=0 Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.734341 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62jdz" event={"ID":"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a","Type":"ContainerDied","Data":"fca43234eec839e47c9d0b844c913c3e592abb338d62553e7f501b6ba4256107"} Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.734366 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-62jdz" event={"ID":"e783f44c-f4b8-48e4-b008-5adbcf3ffa8a","Type":"ContainerDied","Data":"e4abe67d3bc48de1f0b97a5d6080abb9469a1ae7efd87cbba97dec9b82a53b53"} Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.734383 4708 scope.go:117] "RemoveContainer" containerID="fca43234eec839e47c9d0b844c913c3e592abb338d62553e7f501b6ba4256107" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.734441 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-62jdz" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.759943 4708 scope.go:117] "RemoveContainer" containerID="00d3c899dad498c9ead5c17aa4ba1791eacc39c75d96fb1953503d36d265f15d" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.772442 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-62jdz"] Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.799955 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-62jdz"] Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.802098 4708 scope.go:117] "RemoveContainer" containerID="caf69d09efdd8679212428cf74ed4eff0f58c94b2f7f2a6d89f9a286e60745e4" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.858153 4708 scope.go:117] "RemoveContainer" containerID="fca43234eec839e47c9d0b844c913c3e592abb338d62553e7f501b6ba4256107" Feb 27 19:22:45 crc kubenswrapper[4708]: E0227 19:22:45.858577 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fca43234eec839e47c9d0b844c913c3e592abb338d62553e7f501b6ba4256107\": container with ID starting with fca43234eec839e47c9d0b844c913c3e592abb338d62553e7f501b6ba4256107 not found: ID does not exist" containerID="fca43234eec839e47c9d0b844c913c3e592abb338d62553e7f501b6ba4256107" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.858639 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fca43234eec839e47c9d0b844c913c3e592abb338d62553e7f501b6ba4256107"} err="failed to get container status \"fca43234eec839e47c9d0b844c913c3e592abb338d62553e7f501b6ba4256107\": rpc error: code = NotFound desc = could not find container \"fca43234eec839e47c9d0b844c913c3e592abb338d62553e7f501b6ba4256107\": container with ID starting with fca43234eec839e47c9d0b844c913c3e592abb338d62553e7f501b6ba4256107 not found: ID does not exist" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.858667 4708 scope.go:117] "RemoveContainer" containerID="00d3c899dad498c9ead5c17aa4ba1791eacc39c75d96fb1953503d36d265f15d" Feb 27 19:22:45 crc kubenswrapper[4708]: E0227 19:22:45.859140 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00d3c899dad498c9ead5c17aa4ba1791eacc39c75d96fb1953503d36d265f15d\": container with ID starting with 00d3c899dad498c9ead5c17aa4ba1791eacc39c75d96fb1953503d36d265f15d not found: ID does not exist" containerID="00d3c899dad498c9ead5c17aa4ba1791eacc39c75d96fb1953503d36d265f15d" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.859189 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00d3c899dad498c9ead5c17aa4ba1791eacc39c75d96fb1953503d36d265f15d"} err="failed to get container status \"00d3c899dad498c9ead5c17aa4ba1791eacc39c75d96fb1953503d36d265f15d\": rpc error: code = NotFound desc = could not find container \"00d3c899dad498c9ead5c17aa4ba1791eacc39c75d96fb1953503d36d265f15d\": container with ID starting with 00d3c899dad498c9ead5c17aa4ba1791eacc39c75d96fb1953503d36d265f15d not found: ID does not exist" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.859222 4708 scope.go:117] "RemoveContainer" containerID="caf69d09efdd8679212428cf74ed4eff0f58c94b2f7f2a6d89f9a286e60745e4" Feb 27 19:22:45 crc kubenswrapper[4708]: E0227 19:22:45.859538 4708 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"caf69d09efdd8679212428cf74ed4eff0f58c94b2f7f2a6d89f9a286e60745e4\": container with ID starting with caf69d09efdd8679212428cf74ed4eff0f58c94b2f7f2a6d89f9a286e60745e4 not found: ID does not exist" containerID="caf69d09efdd8679212428cf74ed4eff0f58c94b2f7f2a6d89f9a286e60745e4" Feb 27 19:22:45 crc kubenswrapper[4708]: I0227 19:22:45.859585 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caf69d09efdd8679212428cf74ed4eff0f58c94b2f7f2a6d89f9a286e60745e4"} err="failed to get container status \"caf69d09efdd8679212428cf74ed4eff0f58c94b2f7f2a6d89f9a286e60745e4\": rpc error: code = NotFound desc = could not find container \"caf69d09efdd8679212428cf74ed4eff0f58c94b2f7f2a6d89f9a286e60745e4\": container with ID starting with caf69d09efdd8679212428cf74ed4eff0f58c94b2f7f2a6d89f9a286e60745e4 not found: ID does not exist" Feb 27 19:22:46 crc kubenswrapper[4708]: I0227 19:22:46.244992 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" path="/var/lib/kubelet/pods/e783f44c-f4b8-48e4-b008-5adbcf3ffa8a/volumes" Feb 27 19:22:46 crc kubenswrapper[4708]: I0227 19:22:46.757504 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Feb 27 19:22:48 crc kubenswrapper[4708]: I0227 19:22:48.771985 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a2707740-9be6-47c5-996c-43c292ad9758","Type":"ContainerStarted","Data":"fc266ff828e5db3ba425b016b81194c5ce306911feca30e154d9073aa05ac365"} Feb 27 19:22:48 crc kubenswrapper[4708]: I0227 19:22:48.797610 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=4.190923876 podStartE2EDuration="57.797586826s" podCreationTimestamp="2026-02-27 19:21:51 +0000 UTC" firstStartedPulling="2026-02-27 19:21:53.147141503 +0000 UTC m=+8911.662939080" lastFinishedPulling="2026-02-27 19:22:46.753804423 +0000 UTC m=+8965.269602030" observedRunningTime="2026-02-27 19:22:48.787836861 +0000 UTC m=+8967.303634448" watchObservedRunningTime="2026-02-27 19:22:48.797586826 +0000 UTC m=+8967.313384413" Feb 27 19:22:57 crc kubenswrapper[4708]: I0227 19:22:57.228890 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:22:57 crc kubenswrapper[4708]: E0227 19:22:57.229480 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:23:11 crc kubenswrapper[4708]: I0227 19:23:11.229061 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:23:11 crc kubenswrapper[4708]: E0227 19:23:11.229920 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:23:26 crc kubenswrapper[4708]: I0227 19:23:26.229918 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:23:26 crc kubenswrapper[4708]: E0227 19:23:26.232431 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:23:37 crc kubenswrapper[4708]: I0227 19:23:37.229063 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:23:38 crc kubenswrapper[4708]: I0227 19:23:38.301148 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"ffc2e38e46d1828467c934992ae971675459062995acbb761f6a672671c2fe7a"} Feb 27 19:24:00 crc kubenswrapper[4708]: I0227 19:24:00.191667 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537004-7fhss"] Feb 27 19:24:00 crc kubenswrapper[4708]: E0227 19:24:00.192778 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" containerName="extract-utilities" Feb 27 19:24:00 crc kubenswrapper[4708]: I0227 19:24:00.192797 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" containerName="extract-utilities" Feb 27 19:24:00 crc kubenswrapper[4708]: E0227 19:24:00.192810 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" containerName="extract-content" Feb 27 19:24:00 crc kubenswrapper[4708]: I0227 19:24:00.192819 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" containerName="extract-content" Feb 27 19:24:00 crc kubenswrapper[4708]: E0227 19:24:00.192836 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" containerName="registry-server" Feb 27 19:24:00 crc kubenswrapper[4708]: I0227 19:24:00.192844 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" containerName="registry-server" Feb 27 19:24:00 crc kubenswrapper[4708]: I0227 19:24:00.193114 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="e783f44c-f4b8-48e4-b008-5adbcf3ffa8a" containerName="registry-server" Feb 27 19:24:00 crc kubenswrapper[4708]: I0227 19:24:00.194123 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537004-7fhss" Feb 27 19:24:00 crc kubenswrapper[4708]: I0227 19:24:00.198234 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:24:00 crc kubenswrapper[4708]: I0227 19:24:00.198283 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:24:00 crc kubenswrapper[4708]: I0227 19:24:00.199570 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:24:00 crc kubenswrapper[4708]: I0227 19:24:00.203981 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537004-7fhss"] Feb 27 19:24:00 crc kubenswrapper[4708]: I0227 19:24:00.291640 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6dd8\" (UniqueName: \"kubernetes.io/projected/4b796b61-9ca5-4888-8285-b246e1e6fc4c-kube-api-access-f6dd8\") pod \"auto-csr-approver-29537004-7fhss\" (UID: \"4b796b61-9ca5-4888-8285-b246e1e6fc4c\") " pod="openshift-infra/auto-csr-approver-29537004-7fhss" Feb 27 19:24:00 crc kubenswrapper[4708]: I0227 19:24:00.393948 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6dd8\" (UniqueName: \"kubernetes.io/projected/4b796b61-9ca5-4888-8285-b246e1e6fc4c-kube-api-access-f6dd8\") pod \"auto-csr-approver-29537004-7fhss\" (UID: \"4b796b61-9ca5-4888-8285-b246e1e6fc4c\") " pod="openshift-infra/auto-csr-approver-29537004-7fhss" Feb 27 19:24:00 crc kubenswrapper[4708]: I0227 19:24:00.410728 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6dd8\" (UniqueName: \"kubernetes.io/projected/4b796b61-9ca5-4888-8285-b246e1e6fc4c-kube-api-access-f6dd8\") pod \"auto-csr-approver-29537004-7fhss\" (UID: \"4b796b61-9ca5-4888-8285-b246e1e6fc4c\") " pod="openshift-infra/auto-csr-approver-29537004-7fhss" Feb 27 19:24:00 crc kubenswrapper[4708]: I0227 19:24:00.514789 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537004-7fhss" Feb 27 19:24:01 crc kubenswrapper[4708]: I0227 19:24:01.041747 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537004-7fhss"] Feb 27 19:24:01 crc kubenswrapper[4708]: W0227 19:24:01.045492 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b796b61_9ca5_4888_8285_b246e1e6fc4c.slice/crio-5c824b9e4c2bbe760b3a76317ec0826a1a90912ec1463802e13034e0093e6079 WatchSource:0}: Error finding container 5c824b9e4c2bbe760b3a76317ec0826a1a90912ec1463802e13034e0093e6079: Status 404 returned error can't find the container with id 5c824b9e4c2bbe760b3a76317ec0826a1a90912ec1463802e13034e0093e6079 Feb 27 19:24:01 crc kubenswrapper[4708]: I0227 19:24:01.610601 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537004-7fhss" event={"ID":"4b796b61-9ca5-4888-8285-b246e1e6fc4c","Type":"ContainerStarted","Data":"5c824b9e4c2bbe760b3a76317ec0826a1a90912ec1463802e13034e0093e6079"} Feb 27 19:24:02 crc kubenswrapper[4708]: I0227 19:24:02.621521 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537004-7fhss" event={"ID":"4b796b61-9ca5-4888-8285-b246e1e6fc4c","Type":"ContainerStarted","Data":"c79b4ed1fe4777a4b0ef25111dea470667d9571a0ed51c8eef0bd0a106026ea8"} Feb 27 19:24:02 crc kubenswrapper[4708]: I0227 19:24:02.640270 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537004-7fhss" podStartSLOduration=1.490685739 podStartE2EDuration="2.640245433s" podCreationTimestamp="2026-02-27 19:24:00 +0000 UTC" firstStartedPulling="2026-02-27 19:24:01.050944767 +0000 UTC m=+9039.566742354" lastFinishedPulling="2026-02-27 19:24:02.200504461 +0000 UTC m=+9040.716302048" observedRunningTime="2026-02-27 19:24:02.638164115 +0000 UTC m=+9041.153961702" watchObservedRunningTime="2026-02-27 19:24:02.640245433 +0000 UTC m=+9041.156043020" Feb 27 19:24:03 crc kubenswrapper[4708]: I0227 19:24:03.632041 4708 generic.go:334] "Generic (PLEG): container finished" podID="4b796b61-9ca5-4888-8285-b246e1e6fc4c" containerID="c79b4ed1fe4777a4b0ef25111dea470667d9571a0ed51c8eef0bd0a106026ea8" exitCode=0 Feb 27 19:24:03 crc kubenswrapper[4708]: I0227 19:24:03.632084 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537004-7fhss" event={"ID":"4b796b61-9ca5-4888-8285-b246e1e6fc4c","Type":"ContainerDied","Data":"c79b4ed1fe4777a4b0ef25111dea470667d9571a0ed51c8eef0bd0a106026ea8"} Feb 27 19:24:05 crc kubenswrapper[4708]: I0227 19:24:05.195736 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537004-7fhss" Feb 27 19:24:05 crc kubenswrapper[4708]: I0227 19:24:05.300734 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6dd8\" (UniqueName: \"kubernetes.io/projected/4b796b61-9ca5-4888-8285-b246e1e6fc4c-kube-api-access-f6dd8\") pod \"4b796b61-9ca5-4888-8285-b246e1e6fc4c\" (UID: \"4b796b61-9ca5-4888-8285-b246e1e6fc4c\") " Feb 27 19:24:05 crc kubenswrapper[4708]: I0227 19:24:05.308685 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b796b61-9ca5-4888-8285-b246e1e6fc4c-kube-api-access-f6dd8" (OuterVolumeSpecName: "kube-api-access-f6dd8") pod "4b796b61-9ca5-4888-8285-b246e1e6fc4c" (UID: "4b796b61-9ca5-4888-8285-b246e1e6fc4c"). InnerVolumeSpecName "kube-api-access-f6dd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:24:05 crc kubenswrapper[4708]: I0227 19:24:05.336373 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29536998-xrq8h"] Feb 27 19:24:05 crc kubenswrapper[4708]: I0227 19:24:05.346319 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29536998-xrq8h"] Feb 27 19:24:05 crc kubenswrapper[4708]: I0227 19:24:05.403335 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6dd8\" (UniqueName: \"kubernetes.io/projected/4b796b61-9ca5-4888-8285-b246e1e6fc4c-kube-api-access-f6dd8\") on node \"crc\" DevicePath \"\"" Feb 27 19:24:05 crc kubenswrapper[4708]: I0227 19:24:05.651501 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537004-7fhss" event={"ID":"4b796b61-9ca5-4888-8285-b246e1e6fc4c","Type":"ContainerDied","Data":"5c824b9e4c2bbe760b3a76317ec0826a1a90912ec1463802e13034e0093e6079"} Feb 27 19:24:05 crc kubenswrapper[4708]: I0227 19:24:05.651774 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c824b9e4c2bbe760b3a76317ec0826a1a90912ec1463802e13034e0093e6079" Feb 27 19:24:05 crc kubenswrapper[4708]: I0227 19:24:05.651710 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537004-7fhss" Feb 27 19:24:06 crc kubenswrapper[4708]: I0227 19:24:06.255079 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb142a2f-83db-4bcd-bee8-82ee83b2bd0e" path="/var/lib/kubelet/pods/bb142a2f-83db-4bcd-bee8-82ee83b2bd0e/volumes" Feb 27 19:24:32 crc kubenswrapper[4708]: I0227 19:24:32.307355 4708 scope.go:117] "RemoveContainer" containerID="adec1cfc3397747ef542a1e57a87ed8e9898b1f3ced9cffa5369f11dab6fd851" Feb 27 19:26:00 crc kubenswrapper[4708]: I0227 19:26:00.186685 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537006-wsz82"] Feb 27 19:26:00 crc kubenswrapper[4708]: E0227 19:26:00.188325 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b796b61-9ca5-4888-8285-b246e1e6fc4c" containerName="oc" Feb 27 19:26:00 crc kubenswrapper[4708]: I0227 19:26:00.188348 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b796b61-9ca5-4888-8285-b246e1e6fc4c" containerName="oc" Feb 27 19:26:00 crc kubenswrapper[4708]: I0227 19:26:00.189279 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b796b61-9ca5-4888-8285-b246e1e6fc4c" containerName="oc" Feb 27 19:26:00 crc kubenswrapper[4708]: I0227 19:26:00.191563 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537006-wsz82" Feb 27 19:26:00 crc kubenswrapper[4708]: I0227 19:26:00.195741 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:26:00 crc kubenswrapper[4708]: I0227 19:26:00.195930 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:26:00 crc kubenswrapper[4708]: I0227 19:26:00.196017 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:26:00 crc kubenswrapper[4708]: I0227 19:26:00.249865 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537006-wsz82"] Feb 27 19:26:00 crc kubenswrapper[4708]: I0227 19:26:00.400681 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whlxx\" (UniqueName: \"kubernetes.io/projected/03b7dcf3-b9de-4111-bfdf-c872d8f34b03-kube-api-access-whlxx\") pod \"auto-csr-approver-29537006-wsz82\" (UID: \"03b7dcf3-b9de-4111-bfdf-c872d8f34b03\") " pod="openshift-infra/auto-csr-approver-29537006-wsz82" Feb 27 19:26:00 crc kubenswrapper[4708]: I0227 19:26:00.503019 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whlxx\" (UniqueName: \"kubernetes.io/projected/03b7dcf3-b9de-4111-bfdf-c872d8f34b03-kube-api-access-whlxx\") pod \"auto-csr-approver-29537006-wsz82\" (UID: \"03b7dcf3-b9de-4111-bfdf-c872d8f34b03\") " pod="openshift-infra/auto-csr-approver-29537006-wsz82" Feb 27 19:26:00 crc kubenswrapper[4708]: I0227 19:26:00.540455 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whlxx\" (UniqueName: \"kubernetes.io/projected/03b7dcf3-b9de-4111-bfdf-c872d8f34b03-kube-api-access-whlxx\") pod \"auto-csr-approver-29537006-wsz82\" (UID: \"03b7dcf3-b9de-4111-bfdf-c872d8f34b03\") " pod="openshift-infra/auto-csr-approver-29537006-wsz82" Feb 27 19:26:00 crc kubenswrapper[4708]: I0227 19:26:00.549254 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537006-wsz82" Feb 27 19:26:01 crc kubenswrapper[4708]: I0227 19:26:01.047412 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537006-wsz82"] Feb 27 19:26:01 crc kubenswrapper[4708]: I0227 19:26:01.049594 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:26:01 crc kubenswrapper[4708]: I0227 19:26:01.977783 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537006-wsz82" event={"ID":"03b7dcf3-b9de-4111-bfdf-c872d8f34b03","Type":"ContainerStarted","Data":"0c7f330c7cad4cda568a08824851ae43456bb8080b3214905ee6e7302534aaad"} Feb 27 19:26:02 crc kubenswrapper[4708]: I0227 19:26:02.989952 4708 generic.go:334] "Generic (PLEG): container finished" podID="03b7dcf3-b9de-4111-bfdf-c872d8f34b03" containerID="638ac760b601ba05990116642e22b573ebf37e07030fd57447bcd772ece69c08" exitCode=0 Feb 27 19:26:02 crc kubenswrapper[4708]: I0227 19:26:02.990063 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537006-wsz82" event={"ID":"03b7dcf3-b9de-4111-bfdf-c872d8f34b03","Type":"ContainerDied","Data":"638ac760b601ba05990116642e22b573ebf37e07030fd57447bcd772ece69c08"} Feb 27 19:26:04 crc kubenswrapper[4708]: I0227 19:26:04.601698 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537006-wsz82" Feb 27 19:26:04 crc kubenswrapper[4708]: I0227 19:26:04.706181 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whlxx\" (UniqueName: \"kubernetes.io/projected/03b7dcf3-b9de-4111-bfdf-c872d8f34b03-kube-api-access-whlxx\") pod \"03b7dcf3-b9de-4111-bfdf-c872d8f34b03\" (UID: \"03b7dcf3-b9de-4111-bfdf-c872d8f34b03\") " Feb 27 19:26:04 crc kubenswrapper[4708]: I0227 19:26:04.712978 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03b7dcf3-b9de-4111-bfdf-c872d8f34b03-kube-api-access-whlxx" (OuterVolumeSpecName: "kube-api-access-whlxx") pod "03b7dcf3-b9de-4111-bfdf-c872d8f34b03" (UID: "03b7dcf3-b9de-4111-bfdf-c872d8f34b03"). InnerVolumeSpecName "kube-api-access-whlxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:26:04 crc kubenswrapper[4708]: I0227 19:26:04.809600 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whlxx\" (UniqueName: \"kubernetes.io/projected/03b7dcf3-b9de-4111-bfdf-c872d8f34b03-kube-api-access-whlxx\") on node \"crc\" DevicePath \"\"" Feb 27 19:26:05 crc kubenswrapper[4708]: I0227 19:26:05.008489 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537006-wsz82" event={"ID":"03b7dcf3-b9de-4111-bfdf-c872d8f34b03","Type":"ContainerDied","Data":"0c7f330c7cad4cda568a08824851ae43456bb8080b3214905ee6e7302534aaad"} Feb 27 19:26:05 crc kubenswrapper[4708]: I0227 19:26:05.008525 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c7f330c7cad4cda568a08824851ae43456bb8080b3214905ee6e7302534aaad" Feb 27 19:26:05 crc kubenswrapper[4708]: I0227 19:26:05.008576 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537006-wsz82" Feb 27 19:26:05 crc kubenswrapper[4708]: I0227 19:26:05.631172 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:26:05 crc kubenswrapper[4708]: I0227 19:26:05.631975 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:26:05 crc kubenswrapper[4708]: I0227 19:26:05.669685 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537000-fr8hb"] Feb 27 19:26:05 crc kubenswrapper[4708]: I0227 19:26:05.679597 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537000-fr8hb"] Feb 27 19:26:06 crc kubenswrapper[4708]: I0227 19:26:06.240519 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8606046-7a55-4729-b9fd-ced63bd8685e" path="/var/lib/kubelet/pods/d8606046-7a55-4729-b9fd-ced63bd8685e/volumes" Feb 27 19:26:32 crc kubenswrapper[4708]: I0227 19:26:32.422119 4708 scope.go:117] "RemoveContainer" containerID="6db6c01b3373755b81683dbf9ad85c7930a954c9edef0ac07ac3e458762c0c74" Feb 27 19:26:35 crc kubenswrapper[4708]: I0227 19:26:35.631350 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:26:35 crc kubenswrapper[4708]: I0227 19:26:35.631966 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:27:05 crc kubenswrapper[4708]: I0227 19:27:05.631076 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:27:05 crc kubenswrapper[4708]: I0227 19:27:05.631622 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:27:05 crc kubenswrapper[4708]: I0227 19:27:05.631664 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 19:27:05 crc kubenswrapper[4708]: I0227 19:27:05.632668 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"ffc2e38e46d1828467c934992ae971675459062995acbb761f6a672671c2fe7a"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:27:05 crc kubenswrapper[4708]: I0227 19:27:05.632744 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://ffc2e38e46d1828467c934992ae971675459062995acbb761f6a672671c2fe7a" gracePeriod=600 Feb 27 19:27:06 crc kubenswrapper[4708]: I0227 19:27:06.649569 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="ffc2e38e46d1828467c934992ae971675459062995acbb761f6a672671c2fe7a" exitCode=0 Feb 27 19:27:06 crc kubenswrapper[4708]: I0227 19:27:06.649611 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"ffc2e38e46d1828467c934992ae971675459062995acbb761f6a672671c2fe7a"} Feb 27 19:27:06 crc kubenswrapper[4708]: I0227 19:27:06.650251 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4"} Feb 27 19:27:06 crc kubenswrapper[4708]: I0227 19:27:06.650294 4708 scope.go:117] "RemoveContainer" containerID="3de164ec7b24a63709feef6a0d94a85e71287f3ffdf85f2d49da5aeb9815644f" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.106580 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wwcz9"] Feb 27 19:27:30 crc kubenswrapper[4708]: E0227 19:27:30.107520 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03b7dcf3-b9de-4111-bfdf-c872d8f34b03" containerName="oc" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.107533 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="03b7dcf3-b9de-4111-bfdf-c872d8f34b03" containerName="oc" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.107758 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="03b7dcf3-b9de-4111-bfdf-c872d8f34b03" containerName="oc" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.109399 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.117263 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwcz9"] Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.202765 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/903d9c3a-00b7-4b18-9446-aaf55c9986ba-utilities\") pod \"redhat-marketplace-wwcz9\" (UID: \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\") " pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.202915 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjxzn\" (UniqueName: \"kubernetes.io/projected/903d9c3a-00b7-4b18-9446-aaf55c9986ba-kube-api-access-sjxzn\") pod \"redhat-marketplace-wwcz9\" (UID: \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\") " pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.203009 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903d9c3a-00b7-4b18-9446-aaf55c9986ba-catalog-content\") pod \"redhat-marketplace-wwcz9\" (UID: \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\") " pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.305333 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjxzn\" (UniqueName: \"kubernetes.io/projected/903d9c3a-00b7-4b18-9446-aaf55c9986ba-kube-api-access-sjxzn\") pod \"redhat-marketplace-wwcz9\" (UID: \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\") " pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.305388 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903d9c3a-00b7-4b18-9446-aaf55c9986ba-catalog-content\") pod \"redhat-marketplace-wwcz9\" (UID: \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\") " pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.305571 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/903d9c3a-00b7-4b18-9446-aaf55c9986ba-utilities\") pod \"redhat-marketplace-wwcz9\" (UID: \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\") " pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.306052 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903d9c3a-00b7-4b18-9446-aaf55c9986ba-catalog-content\") pod \"redhat-marketplace-wwcz9\" (UID: \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\") " pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.306099 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/903d9c3a-00b7-4b18-9446-aaf55c9986ba-utilities\") pod \"redhat-marketplace-wwcz9\" (UID: \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\") " pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.326668 4708 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-sjxzn\" (UniqueName: \"kubernetes.io/projected/903d9c3a-00b7-4b18-9446-aaf55c9986ba-kube-api-access-sjxzn\") pod \"redhat-marketplace-wwcz9\" (UID: \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\") " pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.459542 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:30 crc kubenswrapper[4708]: I0227 19:27:30.981413 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwcz9"] Feb 27 19:27:31 crc kubenswrapper[4708]: I0227 19:27:31.878333 4708 generic.go:334] "Generic (PLEG): container finished" podID="903d9c3a-00b7-4b18-9446-aaf55c9986ba" containerID="42455bae7b9a07883431463cd359831bf9dff9639e6a16b36a0940fbda7fb2fe" exitCode=0 Feb 27 19:27:31 crc kubenswrapper[4708]: I0227 19:27:31.878393 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwcz9" event={"ID":"903d9c3a-00b7-4b18-9446-aaf55c9986ba","Type":"ContainerDied","Data":"42455bae7b9a07883431463cd359831bf9dff9639e6a16b36a0940fbda7fb2fe"} Feb 27 19:27:31 crc kubenswrapper[4708]: I0227 19:27:31.878884 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwcz9" event={"ID":"903d9c3a-00b7-4b18-9446-aaf55c9986ba","Type":"ContainerStarted","Data":"533b2e923bddf94258466ee1b1fa0cf1e5918c77737e4bc77e16e85cb12ef695"} Feb 27 19:27:33 crc kubenswrapper[4708]: I0227 19:27:33.902503 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwcz9" event={"ID":"903d9c3a-00b7-4b18-9446-aaf55c9986ba","Type":"ContainerStarted","Data":"b05eb5e335b6a141c0929cec3d828e3458151b6aaa88890cef788a87d7739ed5"} Feb 27 19:27:35 crc kubenswrapper[4708]: I0227 19:27:35.923425 4708 generic.go:334] "Generic (PLEG): container finished" podID="903d9c3a-00b7-4b18-9446-aaf55c9986ba" containerID="b05eb5e335b6a141c0929cec3d828e3458151b6aaa88890cef788a87d7739ed5" exitCode=0 Feb 27 19:27:35 crc kubenswrapper[4708]: I0227 19:27:35.923526 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwcz9" event={"ID":"903d9c3a-00b7-4b18-9446-aaf55c9986ba","Type":"ContainerDied","Data":"b05eb5e335b6a141c0929cec3d828e3458151b6aaa88890cef788a87d7739ed5"} Feb 27 19:27:37 crc kubenswrapper[4708]: I0227 19:27:37.238019 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-n9wf4"] Feb 27 19:27:37 crc kubenswrapper[4708]: I0227 19:27:37.240495 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:27:37 crc kubenswrapper[4708]: I0227 19:27:37.257295 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n9wf4"] Feb 27 19:27:37 crc kubenswrapper[4708]: I0227 19:27:37.369537 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77b826c6-78ac-45a7-8812-493bc663e39e-catalog-content\") pod \"redhat-operators-n9wf4\" (UID: \"77b826c6-78ac-45a7-8812-493bc663e39e\") " pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:27:37 crc kubenswrapper[4708]: I0227 19:27:37.369611 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77b826c6-78ac-45a7-8812-493bc663e39e-utilities\") pod \"redhat-operators-n9wf4\" (UID: \"77b826c6-78ac-45a7-8812-493bc663e39e\") " pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:27:37 crc kubenswrapper[4708]: I0227 19:27:37.370246 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjznr\" (UniqueName: \"kubernetes.io/projected/77b826c6-78ac-45a7-8812-493bc663e39e-kube-api-access-tjznr\") pod \"redhat-operators-n9wf4\" (UID: \"77b826c6-78ac-45a7-8812-493bc663e39e\") " pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:27:37 crc kubenswrapper[4708]: I0227 19:27:37.471994 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjznr\" (UniqueName: \"kubernetes.io/projected/77b826c6-78ac-45a7-8812-493bc663e39e-kube-api-access-tjznr\") pod \"redhat-operators-n9wf4\" (UID: \"77b826c6-78ac-45a7-8812-493bc663e39e\") " pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:27:37 crc kubenswrapper[4708]: I0227 19:27:37.472103 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77b826c6-78ac-45a7-8812-493bc663e39e-catalog-content\") pod \"redhat-operators-n9wf4\" (UID: \"77b826c6-78ac-45a7-8812-493bc663e39e\") " pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:27:37 crc kubenswrapper[4708]: I0227 19:27:37.472131 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77b826c6-78ac-45a7-8812-493bc663e39e-utilities\") pod \"redhat-operators-n9wf4\" (UID: \"77b826c6-78ac-45a7-8812-493bc663e39e\") " pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:27:37 crc kubenswrapper[4708]: I0227 19:27:37.472659 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77b826c6-78ac-45a7-8812-493bc663e39e-catalog-content\") pod \"redhat-operators-n9wf4\" (UID: \"77b826c6-78ac-45a7-8812-493bc663e39e\") " pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:27:37 crc kubenswrapper[4708]: I0227 19:27:37.472692 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77b826c6-78ac-45a7-8812-493bc663e39e-utilities\") pod \"redhat-operators-n9wf4\" (UID: \"77b826c6-78ac-45a7-8812-493bc663e39e\") " pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:27:37 crc kubenswrapper[4708]: I0227 19:27:37.490622 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-tjznr\" (UniqueName: \"kubernetes.io/projected/77b826c6-78ac-45a7-8812-493bc663e39e-kube-api-access-tjznr\") pod \"redhat-operators-n9wf4\" (UID: \"77b826c6-78ac-45a7-8812-493bc663e39e\") " pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:27:37 crc kubenswrapper[4708]: I0227 19:27:37.573562 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:27:38 crc kubenswrapper[4708]: I0227 19:27:38.159034 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-n9wf4"] Feb 27 19:27:38 crc kubenswrapper[4708]: I0227 19:27:38.984019 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwcz9" event={"ID":"903d9c3a-00b7-4b18-9446-aaf55c9986ba","Type":"ContainerStarted","Data":"b430f29e76e261d691e294092ae9177ba8b608b71d746637f367e77c588cfc27"} Feb 27 19:27:39 crc kubenswrapper[4708]: I0227 19:27:39.000626 4708 generic.go:334] "Generic (PLEG): container finished" podID="77b826c6-78ac-45a7-8812-493bc663e39e" containerID="61a77133bb3ca8b46b1222fa96180f574918717518b32c6088b74a64f80ab073" exitCode=0 Feb 27 19:27:39 crc kubenswrapper[4708]: I0227 19:27:39.000668 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9wf4" event={"ID":"77b826c6-78ac-45a7-8812-493bc663e39e","Type":"ContainerDied","Data":"61a77133bb3ca8b46b1222fa96180f574918717518b32c6088b74a64f80ab073"} Feb 27 19:27:39 crc kubenswrapper[4708]: I0227 19:27:39.000693 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9wf4" event={"ID":"77b826c6-78ac-45a7-8812-493bc663e39e","Type":"ContainerStarted","Data":"53974ffb9dc501cbe746281fe54d442eb1c7255d563aca50b0715ace10cb2e09"} Feb 27 19:27:39 crc kubenswrapper[4708]: I0227 19:27:39.036538 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wwcz9" podStartSLOduration=3.129657094 podStartE2EDuration="9.036513765s" podCreationTimestamp="2026-02-27 19:27:30 +0000 UTC" firstStartedPulling="2026-02-27 19:27:31.880662787 +0000 UTC m=+9250.396460374" lastFinishedPulling="2026-02-27 19:27:37.787519458 +0000 UTC m=+9256.303317045" observedRunningTime="2026-02-27 19:27:39.029699753 +0000 UTC m=+9257.545497350" watchObservedRunningTime="2026-02-27 19:27:39.036513765 +0000 UTC m=+9257.552311352" Feb 27 19:27:40 crc kubenswrapper[4708]: I0227 19:27:40.462078 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:40 crc kubenswrapper[4708]: I0227 19:27:40.462983 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:40 crc kubenswrapper[4708]: I0227 19:27:40.515272 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:41 crc kubenswrapper[4708]: I0227 19:27:41.029239 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9wf4" event={"ID":"77b826c6-78ac-45a7-8812-493bc663e39e","Type":"ContainerStarted","Data":"d4848ef888a40bd32f12ca962a1f970c8aaf3d26e48ff925b26aa05feea51261"} Feb 27 19:27:50 crc kubenswrapper[4708]: I0227 19:27:50.526615 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 
27 19:27:50 crc kubenswrapper[4708]: I0227 19:27:50.583448 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwcz9"] Feb 27 19:27:50 crc kubenswrapper[4708]: I0227 19:27:50.700924 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wwcz9" podUID="903d9c3a-00b7-4b18-9446-aaf55c9986ba" containerName="registry-server" containerID="cri-o://b430f29e76e261d691e294092ae9177ba8b608b71d746637f367e77c588cfc27" gracePeriod=2 Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.513780 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.581197 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjxzn\" (UniqueName: \"kubernetes.io/projected/903d9c3a-00b7-4b18-9446-aaf55c9986ba-kube-api-access-sjxzn\") pod \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\" (UID: \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\") " Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.581358 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903d9c3a-00b7-4b18-9446-aaf55c9986ba-catalog-content\") pod \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\" (UID: \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\") " Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.581414 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/903d9c3a-00b7-4b18-9446-aaf55c9986ba-utilities\") pod \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\" (UID: \"903d9c3a-00b7-4b18-9446-aaf55c9986ba\") " Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.582151 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/903d9c3a-00b7-4b18-9446-aaf55c9986ba-utilities" (OuterVolumeSpecName: "utilities") pod "903d9c3a-00b7-4b18-9446-aaf55c9986ba" (UID: "903d9c3a-00b7-4b18-9446-aaf55c9986ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.591467 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/903d9c3a-00b7-4b18-9446-aaf55c9986ba-kube-api-access-sjxzn" (OuterVolumeSpecName: "kube-api-access-sjxzn") pod "903d9c3a-00b7-4b18-9446-aaf55c9986ba" (UID: "903d9c3a-00b7-4b18-9446-aaf55c9986ba"). InnerVolumeSpecName "kube-api-access-sjxzn". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.613221 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/903d9c3a-00b7-4b18-9446-aaf55c9986ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "903d9c3a-00b7-4b18-9446-aaf55c9986ba" (UID: "903d9c3a-00b7-4b18-9446-aaf55c9986ba"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.683897 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/903d9c3a-00b7-4b18-9446-aaf55c9986ba-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.683954 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/903d9c3a-00b7-4b18-9446-aaf55c9986ba-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.683967 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sjxzn\" (UniqueName: \"kubernetes.io/projected/903d9c3a-00b7-4b18-9446-aaf55c9986ba-kube-api-access-sjxzn\") on node \"crc\" DevicePath \"\"" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.713680 4708 generic.go:334] "Generic (PLEG): container finished" podID="903d9c3a-00b7-4b18-9446-aaf55c9986ba" containerID="b430f29e76e261d691e294092ae9177ba8b608b71d746637f367e77c588cfc27" exitCode=0 Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.713724 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwcz9" event={"ID":"903d9c3a-00b7-4b18-9446-aaf55c9986ba","Type":"ContainerDied","Data":"b430f29e76e261d691e294092ae9177ba8b608b71d746637f367e77c588cfc27"} Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.713751 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wwcz9" event={"ID":"903d9c3a-00b7-4b18-9446-aaf55c9986ba","Type":"ContainerDied","Data":"533b2e923bddf94258466ee1b1fa0cf1e5918c77737e4bc77e16e85cb12ef695"} Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.713769 4708 scope.go:117] "RemoveContainer" containerID="b430f29e76e261d691e294092ae9177ba8b608b71d746637f367e77c588cfc27" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.713916 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wwcz9" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.771665 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwcz9"] Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.787895 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wwcz9"] Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.788068 4708 scope.go:117] "RemoveContainer" containerID="b05eb5e335b6a141c0929cec3d828e3458151b6aaa88890cef788a87d7739ed5" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.814302 4708 scope.go:117] "RemoveContainer" containerID="42455bae7b9a07883431463cd359831bf9dff9639e6a16b36a0940fbda7fb2fe" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.861739 4708 scope.go:117] "RemoveContainer" containerID="b430f29e76e261d691e294092ae9177ba8b608b71d746637f367e77c588cfc27" Feb 27 19:27:51 crc kubenswrapper[4708]: E0227 19:27:51.862244 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b430f29e76e261d691e294092ae9177ba8b608b71d746637f367e77c588cfc27\": container with ID starting with b430f29e76e261d691e294092ae9177ba8b608b71d746637f367e77c588cfc27 not found: ID does not exist" containerID="b430f29e76e261d691e294092ae9177ba8b608b71d746637f367e77c588cfc27" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.862307 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b430f29e76e261d691e294092ae9177ba8b608b71d746637f367e77c588cfc27"} err="failed to get container status \"b430f29e76e261d691e294092ae9177ba8b608b71d746637f367e77c588cfc27\": rpc error: code = NotFound desc = could not find container \"b430f29e76e261d691e294092ae9177ba8b608b71d746637f367e77c588cfc27\": container with ID starting with b430f29e76e261d691e294092ae9177ba8b608b71d746637f367e77c588cfc27 not found: ID does not exist" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.862348 4708 scope.go:117] "RemoveContainer" containerID="b05eb5e335b6a141c0929cec3d828e3458151b6aaa88890cef788a87d7739ed5" Feb 27 19:27:51 crc kubenswrapper[4708]: E0227 19:27:51.862711 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b05eb5e335b6a141c0929cec3d828e3458151b6aaa88890cef788a87d7739ed5\": container with ID starting with b05eb5e335b6a141c0929cec3d828e3458151b6aaa88890cef788a87d7739ed5 not found: ID does not exist" containerID="b05eb5e335b6a141c0929cec3d828e3458151b6aaa88890cef788a87d7739ed5" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.862745 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05eb5e335b6a141c0929cec3d828e3458151b6aaa88890cef788a87d7739ed5"} err="failed to get container status \"b05eb5e335b6a141c0929cec3d828e3458151b6aaa88890cef788a87d7739ed5\": rpc error: code = NotFound desc = could not find container \"b05eb5e335b6a141c0929cec3d828e3458151b6aaa88890cef788a87d7739ed5\": container with ID starting with b05eb5e335b6a141c0929cec3d828e3458151b6aaa88890cef788a87d7739ed5 not found: ID does not exist" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.862766 4708 scope.go:117] "RemoveContainer" containerID="42455bae7b9a07883431463cd359831bf9dff9639e6a16b36a0940fbda7fb2fe" Feb 27 19:27:51 crc kubenswrapper[4708]: E0227 19:27:51.863107 4708 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"42455bae7b9a07883431463cd359831bf9dff9639e6a16b36a0940fbda7fb2fe\": container with ID starting with 42455bae7b9a07883431463cd359831bf9dff9639e6a16b36a0940fbda7fb2fe not found: ID does not exist" containerID="42455bae7b9a07883431463cd359831bf9dff9639e6a16b36a0940fbda7fb2fe" Feb 27 19:27:51 crc kubenswrapper[4708]: I0227 19:27:51.863137 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42455bae7b9a07883431463cd359831bf9dff9639e6a16b36a0940fbda7fb2fe"} err="failed to get container status \"42455bae7b9a07883431463cd359831bf9dff9639e6a16b36a0940fbda7fb2fe\": rpc error: code = NotFound desc = could not find container \"42455bae7b9a07883431463cd359831bf9dff9639e6a16b36a0940fbda7fb2fe\": container with ID starting with 42455bae7b9a07883431463cd359831bf9dff9639e6a16b36a0940fbda7fb2fe not found: ID does not exist" Feb 27 19:27:52 crc kubenswrapper[4708]: I0227 19:27:52.240431 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="903d9c3a-00b7-4b18-9446-aaf55c9986ba" path="/var/lib/kubelet/pods/903d9c3a-00b7-4b18-9446-aaf55c9986ba/volumes" Feb 27 19:27:58 crc kubenswrapper[4708]: I0227 19:27:58.786622 4708 generic.go:334] "Generic (PLEG): container finished" podID="77b826c6-78ac-45a7-8812-493bc663e39e" containerID="d4848ef888a40bd32f12ca962a1f970c8aaf3d26e48ff925b26aa05feea51261" exitCode=0 Feb 27 19:27:58 crc kubenswrapper[4708]: I0227 19:27:58.786682 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9wf4" event={"ID":"77b826c6-78ac-45a7-8812-493bc663e39e","Type":"ContainerDied","Data":"d4848ef888a40bd32f12ca962a1f970c8aaf3d26e48ff925b26aa05feea51261"} Feb 27 19:28:00 crc kubenswrapper[4708]: I0227 19:28:00.148478 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537008-n2666"] Feb 27 19:28:00 crc kubenswrapper[4708]: E0227 19:28:00.149367 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="903d9c3a-00b7-4b18-9446-aaf55c9986ba" containerName="extract-content" Feb 27 19:28:00 crc kubenswrapper[4708]: I0227 19:28:00.149386 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="903d9c3a-00b7-4b18-9446-aaf55c9986ba" containerName="extract-content" Feb 27 19:28:00 crc kubenswrapper[4708]: E0227 19:28:00.149444 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="903d9c3a-00b7-4b18-9446-aaf55c9986ba" containerName="registry-server" Feb 27 19:28:00 crc kubenswrapper[4708]: I0227 19:28:00.149453 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="903d9c3a-00b7-4b18-9446-aaf55c9986ba" containerName="registry-server" Feb 27 19:28:00 crc kubenswrapper[4708]: E0227 19:28:00.149467 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="903d9c3a-00b7-4b18-9446-aaf55c9986ba" containerName="extract-utilities" Feb 27 19:28:00 crc kubenswrapper[4708]: I0227 19:28:00.149475 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="903d9c3a-00b7-4b18-9446-aaf55c9986ba" containerName="extract-utilities" Feb 27 19:28:00 crc kubenswrapper[4708]: I0227 19:28:00.149733 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="903d9c3a-00b7-4b18-9446-aaf55c9986ba" containerName="registry-server" Feb 27 19:28:00 crc kubenswrapper[4708]: I0227 19:28:00.150630 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537008-n2666" Feb 27 19:28:00 crc kubenswrapper[4708]: I0227 19:28:00.155193 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:28:00 crc kubenswrapper[4708]: I0227 19:28:00.155299 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:28:00 crc kubenswrapper[4708]: I0227 19:28:00.155494 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:28:00 crc kubenswrapper[4708]: I0227 19:28:00.165530 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537008-n2666"] Feb 27 19:28:00 crc kubenswrapper[4708]: I0227 19:28:00.268259 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpmmj\" (UniqueName: \"kubernetes.io/projected/340b6bcc-ec48-476e-b06c-40b190ee17d3-kube-api-access-bpmmj\") pod \"auto-csr-approver-29537008-n2666\" (UID: \"340b6bcc-ec48-476e-b06c-40b190ee17d3\") " pod="openshift-infra/auto-csr-approver-29537008-n2666" Feb 27 19:28:00 crc kubenswrapper[4708]: I0227 19:28:00.370813 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpmmj\" (UniqueName: \"kubernetes.io/projected/340b6bcc-ec48-476e-b06c-40b190ee17d3-kube-api-access-bpmmj\") pod \"auto-csr-approver-29537008-n2666\" (UID: \"340b6bcc-ec48-476e-b06c-40b190ee17d3\") " pod="openshift-infra/auto-csr-approver-29537008-n2666" Feb 27 19:28:00 crc kubenswrapper[4708]: I0227 19:28:00.404243 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpmmj\" (UniqueName: \"kubernetes.io/projected/340b6bcc-ec48-476e-b06c-40b190ee17d3-kube-api-access-bpmmj\") pod \"auto-csr-approver-29537008-n2666\" (UID: \"340b6bcc-ec48-476e-b06c-40b190ee17d3\") " pod="openshift-infra/auto-csr-approver-29537008-n2666" Feb 27 19:28:00 crc kubenswrapper[4708]: I0227 19:28:00.587637 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537008-n2666" Feb 27 19:28:01 crc kubenswrapper[4708]: I0227 19:28:01.117699 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537008-n2666"] Feb 27 19:28:01 crc kubenswrapper[4708]: I0227 19:28:01.824544 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9wf4" event={"ID":"77b826c6-78ac-45a7-8812-493bc663e39e","Type":"ContainerStarted","Data":"8b37f141533ce83150510b3deff1e5764ca2e8d78b466a52a9fd86df9318d3cc"} Feb 27 19:28:01 crc kubenswrapper[4708]: I0227 19:28:01.826132 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537008-n2666" event={"ID":"340b6bcc-ec48-476e-b06c-40b190ee17d3","Type":"ContainerStarted","Data":"ad925124dc26816157adfae8a09e3dd79b25adcec766804dc8d075c20a0adfce"} Feb 27 19:28:01 crc kubenswrapper[4708]: I0227 19:28:01.844067 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-n9wf4" podStartSLOduration=3.281007366 podStartE2EDuration="24.844048321s" podCreationTimestamp="2026-02-27 19:27:37 +0000 UTC" firstStartedPulling="2026-02-27 19:27:39.008994909 +0000 UTC m=+9257.524792496" lastFinishedPulling="2026-02-27 19:28:00.572035864 +0000 UTC m=+9279.087833451" observedRunningTime="2026-02-27 19:28:01.839571305 +0000 UTC m=+9280.355368902" watchObservedRunningTime="2026-02-27 19:28:01.844048321 +0000 UTC m=+9280.359845908" Feb 27 19:28:03 crc kubenswrapper[4708]: I0227 19:28:03.845220 4708 generic.go:334] "Generic (PLEG): container finished" podID="340b6bcc-ec48-476e-b06c-40b190ee17d3" containerID="b74b9d4ad77d53aac94d94a38cfa9dd7f938ed696437359ef610353a07825b04" exitCode=0 Feb 27 19:28:03 crc kubenswrapper[4708]: I0227 19:28:03.845319 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537008-n2666" event={"ID":"340b6bcc-ec48-476e-b06c-40b190ee17d3","Type":"ContainerDied","Data":"b74b9d4ad77d53aac94d94a38cfa9dd7f938ed696437359ef610353a07825b04"} Feb 27 19:28:05 crc kubenswrapper[4708]: I0227 19:28:05.432919 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537008-n2666" Feb 27 19:28:05 crc kubenswrapper[4708]: I0227 19:28:05.584856 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpmmj\" (UniqueName: \"kubernetes.io/projected/340b6bcc-ec48-476e-b06c-40b190ee17d3-kube-api-access-bpmmj\") pod \"340b6bcc-ec48-476e-b06c-40b190ee17d3\" (UID: \"340b6bcc-ec48-476e-b06c-40b190ee17d3\") " Feb 27 19:28:05 crc kubenswrapper[4708]: I0227 19:28:05.591102 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/340b6bcc-ec48-476e-b06c-40b190ee17d3-kube-api-access-bpmmj" (OuterVolumeSpecName: "kube-api-access-bpmmj") pod "340b6bcc-ec48-476e-b06c-40b190ee17d3" (UID: "340b6bcc-ec48-476e-b06c-40b190ee17d3"). InnerVolumeSpecName "kube-api-access-bpmmj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:28:05 crc kubenswrapper[4708]: I0227 19:28:05.687061 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpmmj\" (UniqueName: \"kubernetes.io/projected/340b6bcc-ec48-476e-b06c-40b190ee17d3-kube-api-access-bpmmj\") on node \"crc\" DevicePath \"\"" Feb 27 19:28:05 crc kubenswrapper[4708]: I0227 19:28:05.866539 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537008-n2666" event={"ID":"340b6bcc-ec48-476e-b06c-40b190ee17d3","Type":"ContainerDied","Data":"ad925124dc26816157adfae8a09e3dd79b25adcec766804dc8d075c20a0adfce"} Feb 27 19:28:05 crc kubenswrapper[4708]: I0227 19:28:05.866596 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad925124dc26816157adfae8a09e3dd79b25adcec766804dc8d075c20a0adfce" Feb 27 19:28:05 crc kubenswrapper[4708]: I0227 19:28:05.866597 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537008-n2666" Feb 27 19:28:06 crc kubenswrapper[4708]: I0227 19:28:06.511355 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537002-7bp92"] Feb 27 19:28:06 crc kubenswrapper[4708]: I0227 19:28:06.521613 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537002-7bp92"] Feb 27 19:28:07 crc kubenswrapper[4708]: I0227 19:28:07.574329 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:28:07 crc kubenswrapper[4708]: I0227 19:28:07.574634 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:28:08 crc kubenswrapper[4708]: I0227 19:28:08.240407 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04db75a1-9ca7-41c9-80f3-4152c45549ff" path="/var/lib/kubelet/pods/04db75a1-9ca7-41c9-80f3-4152c45549ff/volumes" Feb 27 19:28:08 crc kubenswrapper[4708]: I0227 19:28:08.619357 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n9wf4" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="registry-server" probeResult="failure" output=< Feb 27 19:28:08 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 19:28:08 crc kubenswrapper[4708]: > Feb 27 19:28:13 crc kubenswrapper[4708]: I0227 19:28:13.792167 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fr2np"] Feb 27 19:28:13 crc kubenswrapper[4708]: E0227 19:28:13.793382 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="340b6bcc-ec48-476e-b06c-40b190ee17d3" containerName="oc" Feb 27 19:28:13 crc kubenswrapper[4708]: I0227 19:28:13.793401 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="340b6bcc-ec48-476e-b06c-40b190ee17d3" containerName="oc" Feb 27 19:28:13 crc kubenswrapper[4708]: I0227 19:28:13.793652 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="340b6bcc-ec48-476e-b06c-40b190ee17d3" containerName="oc" Feb 27 19:28:13 crc kubenswrapper[4708]: I0227 19:28:13.795764 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-fr2np" Feb 27 19:28:13 crc kubenswrapper[4708]: I0227 19:28:13.803936 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fr2np"] Feb 27 19:28:13 crc kubenswrapper[4708]: I0227 19:28:13.955542 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgtbh\" (UniqueName: \"kubernetes.io/projected/38c7c7bc-2802-4361-b061-5de6de042f1a-kube-api-access-hgtbh\") pod \"community-operators-fr2np\" (UID: \"38c7c7bc-2802-4361-b061-5de6de042f1a\") " pod="openshift-marketplace/community-operators-fr2np" Feb 27 19:28:13 crc kubenswrapper[4708]: I0227 19:28:13.955626 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38c7c7bc-2802-4361-b061-5de6de042f1a-utilities\") pod \"community-operators-fr2np\" (UID: \"38c7c7bc-2802-4361-b061-5de6de042f1a\") " pod="openshift-marketplace/community-operators-fr2np" Feb 27 19:28:13 crc kubenswrapper[4708]: I0227 19:28:13.955651 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38c7c7bc-2802-4361-b061-5de6de042f1a-catalog-content\") pod \"community-operators-fr2np\" (UID: \"38c7c7bc-2802-4361-b061-5de6de042f1a\") " pod="openshift-marketplace/community-operators-fr2np" Feb 27 19:28:14 crc kubenswrapper[4708]: I0227 19:28:14.057640 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hgtbh\" (UniqueName: \"kubernetes.io/projected/38c7c7bc-2802-4361-b061-5de6de042f1a-kube-api-access-hgtbh\") pod \"community-operators-fr2np\" (UID: \"38c7c7bc-2802-4361-b061-5de6de042f1a\") " pod="openshift-marketplace/community-operators-fr2np" Feb 27 19:28:14 crc kubenswrapper[4708]: I0227 19:28:14.057746 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38c7c7bc-2802-4361-b061-5de6de042f1a-utilities\") pod \"community-operators-fr2np\" (UID: \"38c7c7bc-2802-4361-b061-5de6de042f1a\") " pod="openshift-marketplace/community-operators-fr2np" Feb 27 19:28:14 crc kubenswrapper[4708]: I0227 19:28:14.057785 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38c7c7bc-2802-4361-b061-5de6de042f1a-catalog-content\") pod \"community-operators-fr2np\" (UID: \"38c7c7bc-2802-4361-b061-5de6de042f1a\") " pod="openshift-marketplace/community-operators-fr2np" Feb 27 19:28:14 crc kubenswrapper[4708]: I0227 19:28:14.058378 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38c7c7bc-2802-4361-b061-5de6de042f1a-utilities\") pod \"community-operators-fr2np\" (UID: \"38c7c7bc-2802-4361-b061-5de6de042f1a\") " pod="openshift-marketplace/community-operators-fr2np" Feb 27 19:28:14 crc kubenswrapper[4708]: I0227 19:28:14.058490 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38c7c7bc-2802-4361-b061-5de6de042f1a-catalog-content\") pod \"community-operators-fr2np\" (UID: \"38c7c7bc-2802-4361-b061-5de6de042f1a\") " pod="openshift-marketplace/community-operators-fr2np" Feb 27 19:28:14 crc kubenswrapper[4708]: I0227 19:28:14.083173 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hgtbh\" (UniqueName: \"kubernetes.io/projected/38c7c7bc-2802-4361-b061-5de6de042f1a-kube-api-access-hgtbh\") pod \"community-operators-fr2np\" (UID: \"38c7c7bc-2802-4361-b061-5de6de042f1a\") " pod="openshift-marketplace/community-operators-fr2np" Feb 27 19:28:14 crc kubenswrapper[4708]: I0227 19:28:14.125392 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fr2np" Feb 27 19:28:14 crc kubenswrapper[4708]: I0227 19:28:14.740419 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fr2np"] Feb 27 19:28:14 crc kubenswrapper[4708]: I0227 19:28:14.963737 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr2np" event={"ID":"38c7c7bc-2802-4361-b061-5de6de042f1a","Type":"ContainerStarted","Data":"f90dc1209d7d21f279c59e3979497447e552b8018c93c81a86579d82ab88a3b7"} Feb 27 19:28:14 crc kubenswrapper[4708]: I0227 19:28:14.964101 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr2np" event={"ID":"38c7c7bc-2802-4361-b061-5de6de042f1a","Type":"ContainerStarted","Data":"2bf0d53677f7607166603c188435de9a91045cd6e74a1d054a55012703362742"} Feb 27 19:28:15 crc kubenswrapper[4708]: I0227 19:28:15.980897 4708 generic.go:334] "Generic (PLEG): container finished" podID="38c7c7bc-2802-4361-b061-5de6de042f1a" containerID="f90dc1209d7d21f279c59e3979497447e552b8018c93c81a86579d82ab88a3b7" exitCode=0 Feb 27 19:28:15 crc kubenswrapper[4708]: I0227 19:28:15.980966 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr2np" event={"ID":"38c7c7bc-2802-4361-b061-5de6de042f1a","Type":"ContainerDied","Data":"f90dc1209d7d21f279c59e3979497447e552b8018c93c81a86579d82ab88a3b7"} Feb 27 19:28:16 crc kubenswrapper[4708]: I0227 19:28:16.996243 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr2np" event={"ID":"38c7c7bc-2802-4361-b061-5de6de042f1a","Type":"ContainerStarted","Data":"0f0d4f0c6cb17bc97958954b41bcca4152315642771c0fd90fdf1233931ba614"} Feb 27 19:28:18 crc kubenswrapper[4708]: I0227 19:28:18.637308 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n9wf4" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="registry-server" probeResult="failure" output=< Feb 27 19:28:18 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 19:28:18 crc kubenswrapper[4708]: > Feb 27 19:28:22 crc kubenswrapper[4708]: I0227 19:28:22.059541 4708 generic.go:334] "Generic (PLEG): container finished" podID="38c7c7bc-2802-4361-b061-5de6de042f1a" containerID="0f0d4f0c6cb17bc97958954b41bcca4152315642771c0fd90fdf1233931ba614" exitCode=0 Feb 27 19:28:22 crc kubenswrapper[4708]: I0227 19:28:22.059596 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr2np" event={"ID":"38c7c7bc-2802-4361-b061-5de6de042f1a","Type":"ContainerDied","Data":"0f0d4f0c6cb17bc97958954b41bcca4152315642771c0fd90fdf1233931ba614"} Feb 27 19:28:24 crc kubenswrapper[4708]: I0227 19:28:24.085078 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr2np" event={"ID":"38c7c7bc-2802-4361-b061-5de6de042f1a","Type":"ContainerStarted","Data":"cdd7c47043bc4a331662adef5c85ec6bd783ee0c40827f20a72ebeecab99b873"} Feb 27 19:28:24 
Feb 27 19:28:24 crc kubenswrapper[4708]: I0227 19:28:24.114298 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fr2np" podStartSLOduration=3.748978028 podStartE2EDuration="11.114275273s" podCreationTimestamp="2026-02-27 19:28:13 +0000 UTC" firstStartedPulling="2026-02-27 19:28:15.983430336 +0000 UTC m=+9294.499227923" lastFinishedPulling="2026-02-27 19:28:23.348727581 +0000 UTC m=+9301.864525168" observedRunningTime="2026-02-27 19:28:24.102336246 +0000 UTC m=+9302.618133833" watchObservedRunningTime="2026-02-27 19:28:24.114275273 +0000 UTC m=+9302.630072860"
Feb 27 19:28:24 crc kubenswrapper[4708]: I0227 19:28:24.126644 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-fr2np"
Feb 27 19:28:24 crc kubenswrapper[4708]: I0227 19:28:24.126685 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fr2np"
Feb 27 19:28:25 crc kubenswrapper[4708]: I0227 19:28:25.173663 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fr2np" podUID="38c7c7bc-2802-4361-b061-5de6de042f1a" containerName="registry-server" probeResult="failure" output=<
Feb 27 19:28:25 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s
Feb 27 19:28:25 crc kubenswrapper[4708]: >
Feb 27 19:28:28 crc kubenswrapper[4708]: I0227 19:28:28.616720 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n9wf4" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="registry-server" probeResult="failure" output=<
Feb 27 19:28:28 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s
Feb 27 19:28:28 crc kubenswrapper[4708]: >
Feb 27 19:28:32 crc kubenswrapper[4708]: I0227 19:28:32.534402 4708 scope.go:117] "RemoveContainer" containerID="6d911fdc132b3dc3f738c4e0977aaedaab46bc5575e7e804eeb68ea50c7933f2"
Feb 27 19:28:35 crc kubenswrapper[4708]: I0227 19:28:35.170404 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fr2np" podUID="38c7c7bc-2802-4361-b061-5de6de042f1a" containerName="registry-server" probeResult="failure" output=<
Feb 27 19:28:35 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s
Feb 27 19:28:35 crc kubenswrapper[4708]: >
Feb 27 19:28:38 crc kubenswrapper[4708]: I0227 19:28:38.621492 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n9wf4" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="registry-server" probeResult="failure" output=<
Feb 27 19:28:38 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s
Feb 27 19:28:38 crc kubenswrapper[4708]: >
Feb 27 19:28:45 crc kubenswrapper[4708]: I0227 19:28:45.217398 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-fr2np" podUID="38c7c7bc-2802-4361-b061-5de6de042f1a" containerName="registry-server" probeResult="failure" output=<
Feb 27 19:28:45 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s
Feb 27 19:28:45 crc kubenswrapper[4708]: >
Feb 27 19:28:48 crc kubenswrapper[4708]: I0227 19:28:48.630400 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n9wf4" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="registry-server" probeResult="failure" output=<
Feb 27 19:28:48 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s
Feb 27 19:28:48 crc kubenswrapper[4708]: >
Feb 27 19:28:54 crc kubenswrapper[4708]: I0227 19:28:54.211302 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fr2np"
Feb 27 19:28:54 crc kubenswrapper[4708]: I0227 19:28:54.284979 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fr2np"
Feb 27 19:28:54 crc kubenswrapper[4708]: I0227 19:28:54.465464 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fr2np"]
Feb 27 19:28:55 crc kubenswrapper[4708]: I0227 19:28:55.385159 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-fr2np" podUID="38c7c7bc-2802-4361-b061-5de6de042f1a" containerName="registry-server" containerID="cri-o://cdd7c47043bc4a331662adef5c85ec6bd783ee0c40827f20a72ebeecab99b873" gracePeriod=2
Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.348275 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fr2np"
Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.398592 4708 generic.go:334] "Generic (PLEG): container finished" podID="38c7c7bc-2802-4361-b061-5de6de042f1a" containerID="cdd7c47043bc4a331662adef5c85ec6bd783ee0c40827f20a72ebeecab99b873" exitCode=0
Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.398629 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr2np" event={"ID":"38c7c7bc-2802-4361-b061-5de6de042f1a","Type":"ContainerDied","Data":"cdd7c47043bc4a331662adef5c85ec6bd783ee0c40827f20a72ebeecab99b873"}
Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.398657 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fr2np" event={"ID":"38c7c7bc-2802-4361-b061-5de6de042f1a","Type":"ContainerDied","Data":"2bf0d53677f7607166603c188435de9a91045cd6e74a1d054a55012703362742"}
Need to start a new one" pod="openshift-marketplace/community-operators-fr2np" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.398672 4708 scope.go:117] "RemoveContainer" containerID="cdd7c47043bc4a331662adef5c85ec6bd783ee0c40827f20a72ebeecab99b873" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.448689 4708 scope.go:117] "RemoveContainer" containerID="0f0d4f0c6cb17bc97958954b41bcca4152315642771c0fd90fdf1233931ba614" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.499613 4708 scope.go:117] "RemoveContainer" containerID="f90dc1209d7d21f279c59e3979497447e552b8018c93c81a86579d82ab88a3b7" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.503628 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38c7c7bc-2802-4361-b061-5de6de042f1a-utilities\") pod \"38c7c7bc-2802-4361-b061-5de6de042f1a\" (UID: \"38c7c7bc-2802-4361-b061-5de6de042f1a\") " Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.503842 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38c7c7bc-2802-4361-b061-5de6de042f1a-catalog-content\") pod \"38c7c7bc-2802-4361-b061-5de6de042f1a\" (UID: \"38c7c7bc-2802-4361-b061-5de6de042f1a\") " Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.504271 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgtbh\" (UniqueName: \"kubernetes.io/projected/38c7c7bc-2802-4361-b061-5de6de042f1a-kube-api-access-hgtbh\") pod \"38c7c7bc-2802-4361-b061-5de6de042f1a\" (UID: \"38c7c7bc-2802-4361-b061-5de6de042f1a\") " Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.504834 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38c7c7bc-2802-4361-b061-5de6de042f1a-utilities" (OuterVolumeSpecName: "utilities") pod "38c7c7bc-2802-4361-b061-5de6de042f1a" (UID: "38c7c7bc-2802-4361-b061-5de6de042f1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.505127 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38c7c7bc-2802-4361-b061-5de6de042f1a-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.514078 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38c7c7bc-2802-4361-b061-5de6de042f1a-kube-api-access-hgtbh" (OuterVolumeSpecName: "kube-api-access-hgtbh") pod "38c7c7bc-2802-4361-b061-5de6de042f1a" (UID: "38c7c7bc-2802-4361-b061-5de6de042f1a"). InnerVolumeSpecName "kube-api-access-hgtbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.568781 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38c7c7bc-2802-4361-b061-5de6de042f1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38c7c7bc-2802-4361-b061-5de6de042f1a" (UID: "38c7c7bc-2802-4361-b061-5de6de042f1a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.581835 4708 scope.go:117] "RemoveContainer" containerID="cdd7c47043bc4a331662adef5c85ec6bd783ee0c40827f20a72ebeecab99b873" Feb 27 19:28:56 crc kubenswrapper[4708]: E0227 19:28:56.582372 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdd7c47043bc4a331662adef5c85ec6bd783ee0c40827f20a72ebeecab99b873\": container with ID starting with cdd7c47043bc4a331662adef5c85ec6bd783ee0c40827f20a72ebeecab99b873 not found: ID does not exist" containerID="cdd7c47043bc4a331662adef5c85ec6bd783ee0c40827f20a72ebeecab99b873" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.582422 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdd7c47043bc4a331662adef5c85ec6bd783ee0c40827f20a72ebeecab99b873"} err="failed to get container status \"cdd7c47043bc4a331662adef5c85ec6bd783ee0c40827f20a72ebeecab99b873\": rpc error: code = NotFound desc = could not find container \"cdd7c47043bc4a331662adef5c85ec6bd783ee0c40827f20a72ebeecab99b873\": container with ID starting with cdd7c47043bc4a331662adef5c85ec6bd783ee0c40827f20a72ebeecab99b873 not found: ID does not exist" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.582454 4708 scope.go:117] "RemoveContainer" containerID="0f0d4f0c6cb17bc97958954b41bcca4152315642771c0fd90fdf1233931ba614" Feb 27 19:28:56 crc kubenswrapper[4708]: E0227 19:28:56.583314 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f0d4f0c6cb17bc97958954b41bcca4152315642771c0fd90fdf1233931ba614\": container with ID starting with 0f0d4f0c6cb17bc97958954b41bcca4152315642771c0fd90fdf1233931ba614 not found: ID does not exist" containerID="0f0d4f0c6cb17bc97958954b41bcca4152315642771c0fd90fdf1233931ba614" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.583654 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f0d4f0c6cb17bc97958954b41bcca4152315642771c0fd90fdf1233931ba614"} err="failed to get container status \"0f0d4f0c6cb17bc97958954b41bcca4152315642771c0fd90fdf1233931ba614\": rpc error: code = NotFound desc = could not find container \"0f0d4f0c6cb17bc97958954b41bcca4152315642771c0fd90fdf1233931ba614\": container with ID starting with 0f0d4f0c6cb17bc97958954b41bcca4152315642771c0fd90fdf1233931ba614 not found: ID does not exist" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.583727 4708 scope.go:117] "RemoveContainer" containerID="f90dc1209d7d21f279c59e3979497447e552b8018c93c81a86579d82ab88a3b7" Feb 27 19:28:56 crc kubenswrapper[4708]: E0227 19:28:56.584559 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f90dc1209d7d21f279c59e3979497447e552b8018c93c81a86579d82ab88a3b7\": container with ID starting with f90dc1209d7d21f279c59e3979497447e552b8018c93c81a86579d82ab88a3b7 not found: ID does not exist" containerID="f90dc1209d7d21f279c59e3979497447e552b8018c93c81a86579d82ab88a3b7" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.584601 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f90dc1209d7d21f279c59e3979497447e552b8018c93c81a86579d82ab88a3b7"} err="failed to get container status \"f90dc1209d7d21f279c59e3979497447e552b8018c93c81a86579d82ab88a3b7\": rpc error: code = NotFound desc = could not 
find container \"f90dc1209d7d21f279c59e3979497447e552b8018c93c81a86579d82ab88a3b7\": container with ID starting with f90dc1209d7d21f279c59e3979497447e552b8018c93c81a86579d82ab88a3b7 not found: ID does not exist" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.606979 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgtbh\" (UniqueName: \"kubernetes.io/projected/38c7c7bc-2802-4361-b061-5de6de042f1a-kube-api-access-hgtbh\") on node \"crc\" DevicePath \"\"" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.607021 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38c7c7bc-2802-4361-b061-5de6de042f1a-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.807896 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-fr2np"] Feb 27 19:28:56 crc kubenswrapper[4708]: I0227 19:28:56.833800 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-fr2np"] Feb 27 19:28:58 crc kubenswrapper[4708]: I0227 19:28:58.241489 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38c7c7bc-2802-4361-b061-5de6de042f1a" path="/var/lib/kubelet/pods/38c7c7bc-2802-4361-b061-5de6de042f1a/volumes" Feb 27 19:28:58 crc kubenswrapper[4708]: I0227 19:28:58.621460 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n9wf4" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="registry-server" probeResult="failure" output=< Feb 27 19:28:58 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 19:28:58 crc kubenswrapper[4708]: > Feb 27 19:29:05 crc kubenswrapper[4708]: I0227 19:29:05.631252 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:29:05 crc kubenswrapper[4708]: I0227 19:29:05.631794 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:29:08 crc kubenswrapper[4708]: I0227 19:29:08.631575 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-n9wf4" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="registry-server" probeResult="failure" output=< Feb 27 19:29:08 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 19:29:08 crc kubenswrapper[4708]: > Feb 27 19:29:17 crc kubenswrapper[4708]: I0227 19:29:17.632479 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:29:17 crc kubenswrapper[4708]: I0227 19:29:17.695805 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:29:17 crc kubenswrapper[4708]: I0227 19:29:17.873771 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n9wf4"] Feb 27 19:29:19 crc kubenswrapper[4708]: I0227 19:29:19.617006 4708 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-n9wf4" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="registry-server" containerID="cri-o://8b37f141533ce83150510b3deff1e5764ca2e8d78b466a52a9fd86df9318d3cc" gracePeriod=2 Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.460927 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.513503 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77b826c6-78ac-45a7-8812-493bc663e39e-utilities\") pod \"77b826c6-78ac-45a7-8812-493bc663e39e\" (UID: \"77b826c6-78ac-45a7-8812-493bc663e39e\") " Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.513781 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tjznr\" (UniqueName: \"kubernetes.io/projected/77b826c6-78ac-45a7-8812-493bc663e39e-kube-api-access-tjznr\") pod \"77b826c6-78ac-45a7-8812-493bc663e39e\" (UID: \"77b826c6-78ac-45a7-8812-493bc663e39e\") " Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.513848 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77b826c6-78ac-45a7-8812-493bc663e39e-catalog-content\") pod \"77b826c6-78ac-45a7-8812-493bc663e39e\" (UID: \"77b826c6-78ac-45a7-8812-493bc663e39e\") " Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.514371 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77b826c6-78ac-45a7-8812-493bc663e39e-utilities" (OuterVolumeSpecName: "utilities") pod "77b826c6-78ac-45a7-8812-493bc663e39e" (UID: "77b826c6-78ac-45a7-8812-493bc663e39e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.517155 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/77b826c6-78ac-45a7-8812-493bc663e39e-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.537163 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77b826c6-78ac-45a7-8812-493bc663e39e-kube-api-access-tjznr" (OuterVolumeSpecName: "kube-api-access-tjznr") pod "77b826c6-78ac-45a7-8812-493bc663e39e" (UID: "77b826c6-78ac-45a7-8812-493bc663e39e"). InnerVolumeSpecName "kube-api-access-tjznr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.618897 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tjznr\" (UniqueName: \"kubernetes.io/projected/77b826c6-78ac-45a7-8812-493bc663e39e-kube-api-access-tjznr\") on node \"crc\" DevicePath \"\"" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.630631 4708 generic.go:334] "Generic (PLEG): container finished" podID="77b826c6-78ac-45a7-8812-493bc663e39e" containerID="8b37f141533ce83150510b3deff1e5764ca2e8d78b466a52a9fd86df9318d3cc" exitCode=0 Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.630669 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9wf4" event={"ID":"77b826c6-78ac-45a7-8812-493bc663e39e","Type":"ContainerDied","Data":"8b37f141533ce83150510b3deff1e5764ca2e8d78b466a52a9fd86df9318d3cc"} Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.630695 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-n9wf4" event={"ID":"77b826c6-78ac-45a7-8812-493bc663e39e","Type":"ContainerDied","Data":"53974ffb9dc501cbe746281fe54d442eb1c7255d563aca50b0715ace10cb2e09"} Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.630712 4708 scope.go:117] "RemoveContainer" containerID="8b37f141533ce83150510b3deff1e5764ca2e8d78b466a52a9fd86df9318d3cc" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.630825 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-n9wf4" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.650540 4708 scope.go:117] "RemoveContainer" containerID="d4848ef888a40bd32f12ca962a1f970c8aaf3d26e48ff925b26aa05feea51261" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.651266 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77b826c6-78ac-45a7-8812-493bc663e39e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "77b826c6-78ac-45a7-8812-493bc663e39e" (UID: "77b826c6-78ac-45a7-8812-493bc663e39e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.676103 4708 scope.go:117] "RemoveContainer" containerID="61a77133bb3ca8b46b1222fa96180f574918717518b32c6088b74a64f80ab073" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.721050 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/77b826c6-78ac-45a7-8812-493bc663e39e-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.733496 4708 scope.go:117] "RemoveContainer" containerID="8b37f141533ce83150510b3deff1e5764ca2e8d78b466a52a9fd86df9318d3cc" Feb 27 19:29:20 crc kubenswrapper[4708]: E0227 19:29:20.734083 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b37f141533ce83150510b3deff1e5764ca2e8d78b466a52a9fd86df9318d3cc\": container with ID starting with 8b37f141533ce83150510b3deff1e5764ca2e8d78b466a52a9fd86df9318d3cc not found: ID does not exist" containerID="8b37f141533ce83150510b3deff1e5764ca2e8d78b466a52a9fd86df9318d3cc" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.734124 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b37f141533ce83150510b3deff1e5764ca2e8d78b466a52a9fd86df9318d3cc"} err="failed to get container status \"8b37f141533ce83150510b3deff1e5764ca2e8d78b466a52a9fd86df9318d3cc\": rpc error: code = NotFound desc = could not find container \"8b37f141533ce83150510b3deff1e5764ca2e8d78b466a52a9fd86df9318d3cc\": container with ID starting with 8b37f141533ce83150510b3deff1e5764ca2e8d78b466a52a9fd86df9318d3cc not found: ID does not exist" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.734148 4708 scope.go:117] "RemoveContainer" containerID="d4848ef888a40bd32f12ca962a1f970c8aaf3d26e48ff925b26aa05feea51261" Feb 27 19:29:20 crc kubenswrapper[4708]: E0227 19:29:20.735123 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d4848ef888a40bd32f12ca962a1f970c8aaf3d26e48ff925b26aa05feea51261\": container with ID starting with d4848ef888a40bd32f12ca962a1f970c8aaf3d26e48ff925b26aa05feea51261 not found: ID does not exist" containerID="d4848ef888a40bd32f12ca962a1f970c8aaf3d26e48ff925b26aa05feea51261" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.735148 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d4848ef888a40bd32f12ca962a1f970c8aaf3d26e48ff925b26aa05feea51261"} err="failed to get container status \"d4848ef888a40bd32f12ca962a1f970c8aaf3d26e48ff925b26aa05feea51261\": rpc error: code = NotFound desc = could not find container \"d4848ef888a40bd32f12ca962a1f970c8aaf3d26e48ff925b26aa05feea51261\": container with ID starting with d4848ef888a40bd32f12ca962a1f970c8aaf3d26e48ff925b26aa05feea51261 not found: ID does not exist" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.735169 4708 scope.go:117] "RemoveContainer" containerID="61a77133bb3ca8b46b1222fa96180f574918717518b32c6088b74a64f80ab073" Feb 27 19:29:20 crc kubenswrapper[4708]: E0227 19:29:20.735775 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"61a77133bb3ca8b46b1222fa96180f574918717518b32c6088b74a64f80ab073\": container with ID starting with 61a77133bb3ca8b46b1222fa96180f574918717518b32c6088b74a64f80ab073 not found: ID does not exist" 
containerID="61a77133bb3ca8b46b1222fa96180f574918717518b32c6088b74a64f80ab073" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.735826 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"61a77133bb3ca8b46b1222fa96180f574918717518b32c6088b74a64f80ab073"} err="failed to get container status \"61a77133bb3ca8b46b1222fa96180f574918717518b32c6088b74a64f80ab073\": rpc error: code = NotFound desc = could not find container \"61a77133bb3ca8b46b1222fa96180f574918717518b32c6088b74a64f80ab073\": container with ID starting with 61a77133bb3ca8b46b1222fa96180f574918717518b32c6088b74a64f80ab073 not found: ID does not exist" Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.967921 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-n9wf4"] Feb 27 19:29:20 crc kubenswrapper[4708]: I0227 19:29:20.978014 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-n9wf4"] Feb 27 19:29:22 crc kubenswrapper[4708]: I0227 19:29:22.248255 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" path="/var/lib/kubelet/pods/77b826c6-78ac-45a7-8812-493bc663e39e/volumes" Feb 27 19:29:35 crc kubenswrapper[4708]: I0227 19:29:35.632064 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:29:35 crc kubenswrapper[4708]: I0227 19:29:35.632593 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.155036 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537010-rkbwk"] Feb 27 19:30:00 crc kubenswrapper[4708]: E0227 19:30:00.156143 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="extract-utilities" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.156161 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="extract-utilities" Feb 27 19:30:00 crc kubenswrapper[4708]: E0227 19:30:00.156190 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c7c7bc-2802-4361-b061-5de6de042f1a" containerName="registry-server" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.156199 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c7c7bc-2802-4361-b061-5de6de042f1a" containerName="registry-server" Feb 27 19:30:00 crc kubenswrapper[4708]: E0227 19:30:00.156209 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c7c7bc-2802-4361-b061-5de6de042f1a" containerName="extract-content" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.156218 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c7c7bc-2802-4361-b061-5de6de042f1a" containerName="extract-content" Feb 27 19:30:00 crc kubenswrapper[4708]: E0227 19:30:00.156249 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38c7c7bc-2802-4361-b061-5de6de042f1a" 
containerName="extract-utilities" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.156256 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="38c7c7bc-2802-4361-b061-5de6de042f1a" containerName="extract-utilities" Feb 27 19:30:00 crc kubenswrapper[4708]: E0227 19:30:00.156268 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="extract-content" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.156275 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="extract-content" Feb 27 19:30:00 crc kubenswrapper[4708]: E0227 19:30:00.156292 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="registry-server" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.156299 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="registry-server" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.156558 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="38c7c7bc-2802-4361-b061-5de6de042f1a" containerName="registry-server" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.156572 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="77b826c6-78ac-45a7-8812-493bc663e39e" containerName="registry-server" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.157519 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537010-rkbwk" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.158073 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl"] Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.159362 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.159788 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.160872 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.161081 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.161583 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.163210 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.167648 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537010-rkbwk"] Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.180126 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl"] Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.251292 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df495444-0d98-4704-955b-e3c41653b2e0-secret-volume\") pod \"collect-profiles-29537010-ldgtl\" (UID: \"df495444-0d98-4704-955b-e3c41653b2e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.251360 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df495444-0d98-4704-955b-e3c41653b2e0-config-volume\") pod \"collect-profiles-29537010-ldgtl\" (UID: \"df495444-0d98-4704-955b-e3c41653b2e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.251445 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krn9h\" (UniqueName: \"kubernetes.io/projected/df495444-0d98-4704-955b-e3c41653b2e0-kube-api-access-krn9h\") pod \"collect-profiles-29537010-ldgtl\" (UID: \"df495444-0d98-4704-955b-e3c41653b2e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.251514 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkcsz\" (UniqueName: \"kubernetes.io/projected/d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b-kube-api-access-bkcsz\") pod \"auto-csr-approver-29537010-rkbwk\" (UID: \"d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b\") " pod="openshift-infra/auto-csr-approver-29537010-rkbwk" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.353329 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkcsz\" (UniqueName: \"kubernetes.io/projected/d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b-kube-api-access-bkcsz\") pod \"auto-csr-approver-29537010-rkbwk\" (UID: \"d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b\") " 
pod="openshift-infra/auto-csr-approver-29537010-rkbwk" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.353443 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df495444-0d98-4704-955b-e3c41653b2e0-secret-volume\") pod \"collect-profiles-29537010-ldgtl\" (UID: \"df495444-0d98-4704-955b-e3c41653b2e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.353482 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df495444-0d98-4704-955b-e3c41653b2e0-config-volume\") pod \"collect-profiles-29537010-ldgtl\" (UID: \"df495444-0d98-4704-955b-e3c41653b2e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.353540 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krn9h\" (UniqueName: \"kubernetes.io/projected/df495444-0d98-4704-955b-e3c41653b2e0-kube-api-access-krn9h\") pod \"collect-profiles-29537010-ldgtl\" (UID: \"df495444-0d98-4704-955b-e3c41653b2e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.354725 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df495444-0d98-4704-955b-e3c41653b2e0-config-volume\") pod \"collect-profiles-29537010-ldgtl\" (UID: \"df495444-0d98-4704-955b-e3c41653b2e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.359960 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df495444-0d98-4704-955b-e3c41653b2e0-secret-volume\") pod \"collect-profiles-29537010-ldgtl\" (UID: \"df495444-0d98-4704-955b-e3c41653b2e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.368980 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkcsz\" (UniqueName: \"kubernetes.io/projected/d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b-kube-api-access-bkcsz\") pod \"auto-csr-approver-29537010-rkbwk\" (UID: \"d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b\") " pod="openshift-infra/auto-csr-approver-29537010-rkbwk" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.373475 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-krn9h\" (UniqueName: \"kubernetes.io/projected/df495444-0d98-4704-955b-e3c41653b2e0-kube-api-access-krn9h\") pod \"collect-profiles-29537010-ldgtl\" (UID: \"df495444-0d98-4704-955b-e3c41653b2e0\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.480477 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537010-rkbwk" Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.494241 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" Feb 27 19:30:00 crc kubenswrapper[4708]: W0227 19:30:00.973083 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd2b05b3e_5563_4b29_8c1a_e604f0bf9b3b.slice/crio-430bd1e2396b32883e74c2810eabce7fb78eaa464e2880cceab31efe35999841 WatchSource:0}: Error finding container 430bd1e2396b32883e74c2810eabce7fb78eaa464e2880cceab31efe35999841: Status 404 returned error can't find the container with id 430bd1e2396b32883e74c2810eabce7fb78eaa464e2880cceab31efe35999841 Feb 27 19:30:00 crc kubenswrapper[4708]: I0227 19:30:00.982516 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537010-rkbwk"] Feb 27 19:30:01 crc kubenswrapper[4708]: I0227 19:30:01.010282 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537010-rkbwk" event={"ID":"d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b","Type":"ContainerStarted","Data":"430bd1e2396b32883e74c2810eabce7fb78eaa464e2880cceab31efe35999841"} Feb 27 19:30:01 crc kubenswrapper[4708]: I0227 19:30:01.137765 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl"] Feb 27 19:30:01 crc kubenswrapper[4708]: W0227 19:30:01.146177 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf495444_0d98_4704_955b_e3c41653b2e0.slice/crio-7da43bf5b889c17df917698a5374c0285dfbed4c8c28a03a477422ff0cd8727e WatchSource:0}: Error finding container 7da43bf5b889c17df917698a5374c0285dfbed4c8c28a03a477422ff0cd8727e: Status 404 returned error can't find the container with id 7da43bf5b889c17df917698a5374c0285dfbed4c8c28a03a477422ff0cd8727e Feb 27 19:30:02 crc kubenswrapper[4708]: I0227 19:30:02.023018 4708 generic.go:334] "Generic (PLEG): container finished" podID="a2707740-9be6-47c5-996c-43c292ad9758" containerID="fc266ff828e5db3ba425b016b81194c5ce306911feca30e154d9073aa05ac365" exitCode=1 Feb 27 19:30:02 crc kubenswrapper[4708]: I0227 19:30:02.023113 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a2707740-9be6-47c5-996c-43c292ad9758","Type":"ContainerDied","Data":"fc266ff828e5db3ba425b016b81194c5ce306911feca30e154d9073aa05ac365"} Feb 27 19:30:02 crc kubenswrapper[4708]: I0227 19:30:02.028168 4708 generic.go:334] "Generic (PLEG): container finished" podID="df495444-0d98-4704-955b-e3c41653b2e0" containerID="964c40840f180ca2f1c1402447b6a0211ae3946e68199996da2179b9e9772814" exitCode=0 Feb 27 19:30:02 crc kubenswrapper[4708]: I0227 19:30:02.028259 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" event={"ID":"df495444-0d98-4704-955b-e3c41653b2e0","Type":"ContainerDied","Data":"964c40840f180ca2f1c1402447b6a0211ae3946e68199996da2179b9e9772814"} Feb 27 19:30:02 crc kubenswrapper[4708]: I0227 19:30:02.028620 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" event={"ID":"df495444-0d98-4704-955b-e3c41653b2e0","Type":"ContainerStarted","Data":"7da43bf5b889c17df917698a5374c0285dfbed4c8c28a03a477422ff0cd8727e"} Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.856830 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.932477 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2707740-9be6-47c5-996c-43c292ad9758-config-data\") pod \"a2707740-9be6-47c5-996c-43c292ad9758\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.932518 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-openstack-config-secret\") pod \"a2707740-9be6-47c5-996c-43c292ad9758\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.932564 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-ca-certs\") pod \"a2707740-9be6-47c5-996c-43c292ad9758\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.932747 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a2707740-9be6-47c5-996c-43c292ad9758-test-operator-ephemeral-workdir\") pod \"a2707740-9be6-47c5-996c-43c292ad9758\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.933052 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2707740-9be6-47c5-996c-43c292ad9758-openstack-config\") pod \"a2707740-9be6-47c5-996c-43c292ad9758\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.933096 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"a2707740-9be6-47c5-996c-43c292ad9758\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.933151 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a2707740-9be6-47c5-996c-43c292ad9758-test-operator-ephemeral-temporary\") pod \"a2707740-9be6-47c5-996c-43c292ad9758\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.933186 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-ssh-key\") pod \"a2707740-9be6-47c5-996c-43c292ad9758\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.933220 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t75ct\" (UniqueName: \"kubernetes.io/projected/a2707740-9be6-47c5-996c-43c292ad9758-kube-api-access-t75ct\") pod \"a2707740-9be6-47c5-996c-43c292ad9758\" (UID: \"a2707740-9be6-47c5-996c-43c292ad9758\") " Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.934325 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2707740-9be6-47c5-996c-43c292ad9758-config-data" (OuterVolumeSpecName: "config-data") pod 
"a2707740-9be6-47c5-996c-43c292ad9758" (UID: "a2707740-9be6-47c5-996c-43c292ad9758"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.934950 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2707740-9be6-47c5-996c-43c292ad9758-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "a2707740-9be6-47c5-996c-43c292ad9758" (UID: "a2707740-9be6-47c5-996c-43c292ad9758"). InnerVolumeSpecName "test-operator-ephemeral-temporary". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.955461 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "test-operator-logs") pod "a2707740-9be6-47c5-996c-43c292ad9758" (UID: "a2707740-9be6-47c5-996c-43c292ad9758"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.965378 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2707740-9be6-47c5-996c-43c292ad9758-kube-api-access-t75ct" (OuterVolumeSpecName: "kube-api-access-t75ct") pod "a2707740-9be6-47c5-996c-43c292ad9758" (UID: "a2707740-9be6-47c5-996c-43c292ad9758"). InnerVolumeSpecName "kube-api-access-t75ct". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.981863 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "a2707740-9be6-47c5-996c-43c292ad9758" (UID: "a2707740-9be6-47c5-996c-43c292ad9758"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.982410 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "a2707740-9be6-47c5-996c-43c292ad9758" (UID: "a2707740-9be6-47c5-996c-43c292ad9758"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:30:03 crc kubenswrapper[4708]: I0227 19:30:03.987528 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a2707740-9be6-47c5-996c-43c292ad9758" (UID: "a2707740-9be6-47c5-996c-43c292ad9758"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.035624 4708 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.035655 4708 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/a2707740-9be6-47c5-996c-43c292ad9758-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.035667 4708 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-ssh-key\") on node \"crc\" DevicePath \"\"" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.035676 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t75ct\" (UniqueName: \"kubernetes.io/projected/a2707740-9be6-47c5-996c-43c292ad9758-kube-api-access-t75ct\") on node \"crc\" DevicePath \"\"" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.035684 4708 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.035692 4708 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2707740-9be6-47c5-996c-43c292ad9758-config-data\") on node \"crc\" DevicePath \"\"" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.035701 4708 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/a2707740-9be6-47c5-996c-43c292ad9758-ca-certs\") on node \"crc\" DevicePath \"\"" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.057529 4708 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.058016 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"a2707740-9be6-47c5-996c-43c292ad9758","Type":"ContainerDied","Data":"a1a561dc32572ff6ac2d24747f3ff23d94717e6c85c0d9b4bc1e24f76f5ee6f9"} Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.058047 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1a561dc32572ff6ac2d24747f3ff23d94717e6c85c0d9b4bc1e24f76f5ee6f9" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.058349 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.061425 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" event={"ID":"df495444-0d98-4704-955b-e3c41653b2e0","Type":"ContainerDied","Data":"7da43bf5b889c17df917698a5374c0285dfbed4c8c28a03a477422ff0cd8727e"} Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.061445 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7da43bf5b889c17df917698a5374c0285dfbed4c8c28a03a477422ff0cd8727e" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.064610 4708 generic.go:334] "Generic (PLEG): container finished" podID="d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b" containerID="275188da689f30ffec3a377e3392df65cb025c7443247e1f539478b694fd3fc5" exitCode=0 Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.064657 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537010-rkbwk" event={"ID":"d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b","Type":"ContainerDied","Data":"275188da689f30ffec3a377e3392df65cb025c7443247e1f539478b694fd3fc5"} Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.092958 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2707740-9be6-47c5-996c-43c292ad9758-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "a2707740-9be6-47c5-996c-43c292ad9758" (UID: "a2707740-9be6-47c5-996c-43c292ad9758"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.137130 4708 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.137157 4708 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/a2707740-9be6-47c5-996c-43c292ad9758-openstack-config\") on node \"crc\" DevicePath \"\"" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.170995 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.238122 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df495444-0d98-4704-955b-e3c41653b2e0-secret-volume\") pod \"df495444-0d98-4704-955b-e3c41653b2e0\" (UID: \"df495444-0d98-4704-955b-e3c41653b2e0\") " Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.238286 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krn9h\" (UniqueName: \"kubernetes.io/projected/df495444-0d98-4704-955b-e3c41653b2e0-kube-api-access-krn9h\") pod \"df495444-0d98-4704-955b-e3c41653b2e0\" (UID: \"df495444-0d98-4704-955b-e3c41653b2e0\") " Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.238744 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df495444-0d98-4704-955b-e3c41653b2e0-config-volume\") pod \"df495444-0d98-4704-955b-e3c41653b2e0\" (UID: \"df495444-0d98-4704-955b-e3c41653b2e0\") " Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.239903 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df495444-0d98-4704-955b-e3c41653b2e0-config-volume" (OuterVolumeSpecName: "config-volume") pod "df495444-0d98-4704-955b-e3c41653b2e0" (UID: "df495444-0d98-4704-955b-e3c41653b2e0"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.242379 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df495444-0d98-4704-955b-e3c41653b2e0-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "df495444-0d98-4704-955b-e3c41653b2e0" (UID: "df495444-0d98-4704-955b-e3c41653b2e0"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.243371 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df495444-0d98-4704-955b-e3c41653b2e0-kube-api-access-krn9h" (OuterVolumeSpecName: "kube-api-access-krn9h") pod "df495444-0d98-4704-955b-e3c41653b2e0" (UID: "df495444-0d98-4704-955b-e3c41653b2e0"). InnerVolumeSpecName "kube-api-access-krn9h". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.342543 4708 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df495444-0d98-4704-955b-e3c41653b2e0-config-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.342597 4708 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/df495444-0d98-4704-955b-e3c41653b2e0-secret-volume\") on node \"crc\" DevicePath \"\"" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.342613 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krn9h\" (UniqueName: \"kubernetes.io/projected/df495444-0d98-4704-955b-e3c41653b2e0-kube-api-access-krn9h\") on node \"crc\" DevicePath \"\"" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.431742 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2707740-9be6-47c5-996c-43c292ad9758-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "a2707740-9be6-47c5-996c-43c292ad9758" (UID: "a2707740-9be6-47c5-996c-43c292ad9758"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:30:04 crc kubenswrapper[4708]: I0227 19:30:04.444676 4708 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/a2707740-9be6-47c5-996c-43c292ad9758-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Feb 27 19:30:05 crc kubenswrapper[4708]: I0227 19:30:05.073688 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29537010-ldgtl" Feb 27 19:30:05 crc kubenswrapper[4708]: I0227 19:30:05.242943 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"] Feb 27 19:30:05 crc kubenswrapper[4708]: I0227 19:30:05.252385 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29536965-sv958"] Feb 27 19:30:05 crc kubenswrapper[4708]: I0227 19:30:05.631697 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:30:05 crc kubenswrapper[4708]: I0227 19:30:05.631754 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:30:05 crc kubenswrapper[4708]: I0227 19:30:05.631808 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 19:30:05 crc kubenswrapper[4708]: I0227 19:30:05.632676 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:30:05 crc kubenswrapper[4708]: I0227 19:30:05.632740 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" gracePeriod=600 Feb 27 19:30:05 crc kubenswrapper[4708]: I0227 19:30:05.688823 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537010-rkbwk" Feb 27 19:30:05 crc kubenswrapper[4708]: E0227 19:30:05.758832 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:30:05 crc kubenswrapper[4708]: I0227 19:30:05.775617 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bkcsz\" (UniqueName: \"kubernetes.io/projected/d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b-kube-api-access-bkcsz\") pod \"d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b\" (UID: \"d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b\") " Feb 27 19:30:05 crc kubenswrapper[4708]: I0227 19:30:05.781523 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b-kube-api-access-bkcsz" (OuterVolumeSpecName: "kube-api-access-bkcsz") pod "d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b" (UID: "d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b"). InnerVolumeSpecName "kube-api-access-bkcsz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:30:05 crc kubenswrapper[4708]: I0227 19:30:05.878242 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bkcsz\" (UniqueName: \"kubernetes.io/projected/d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b-kube-api-access-bkcsz\") on node \"crc\" DevicePath \"\"" Feb 27 19:30:06 crc kubenswrapper[4708]: I0227 19:30:06.085107 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537010-rkbwk" event={"ID":"d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b","Type":"ContainerDied","Data":"430bd1e2396b32883e74c2810eabce7fb78eaa464e2880cceab31efe35999841"} Feb 27 19:30:06 crc kubenswrapper[4708]: I0227 19:30:06.085140 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537010-rkbwk" Feb 27 19:30:06 crc kubenswrapper[4708]: I0227 19:30:06.085153 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="430bd1e2396b32883e74c2810eabce7fb78eaa464e2880cceab31efe35999841" Feb 27 19:30:06 crc kubenswrapper[4708]: I0227 19:30:06.088021 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" exitCode=0 Feb 27 19:30:06 crc kubenswrapper[4708]: I0227 19:30:06.088062 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4"} Feb 27 19:30:06 crc kubenswrapper[4708]: I0227 19:30:06.088101 4708 scope.go:117] "RemoveContainer" containerID="ffc2e38e46d1828467c934992ae971675459062995acbb761f6a672671c2fe7a" Feb 27 19:30:06 crc kubenswrapper[4708]: I0227 19:30:06.088911 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:30:06 crc kubenswrapper[4708]: E0227 19:30:06.089167 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:30:06 crc kubenswrapper[4708]: I0227 19:30:06.242874 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ad7061d-ae06-46d3-8cca-76bc071bfe32" path="/var/lib/kubelet/pods/0ad7061d-ae06-46d3-8cca-76bc071bfe32/volumes" Feb 27 19:30:06 crc kubenswrapper[4708]: I0227 19:30:06.754387 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537004-7fhss"] Feb 27 19:30:06 crc kubenswrapper[4708]: I0227 19:30:06.765562 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537004-7fhss"] Feb 27 19:30:08 crc kubenswrapper[4708]: I0227 19:30:08.241570 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b796b61-9ca5-4888-8285-b246e1e6fc4c" path="/var/lib/kubelet/pods/4b796b61-9ca5-4888-8285-b246e1e6fc4c/volumes" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.158643 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 27 19:30:16 crc kubenswrapper[4708]: E0227 19:30:16.159610 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b" containerName="oc" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.159626 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b" containerName="oc" Feb 27 19:30:16 crc kubenswrapper[4708]: E0227 19:30:16.159640 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2707740-9be6-47c5-996c-43c292ad9758" containerName="tempest-tests-tempest-tests-runner" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.159652 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2707740-9be6-47c5-996c-43c292ad9758" 
containerName="tempest-tests-tempest-tests-runner" Feb 27 19:30:16 crc kubenswrapper[4708]: E0227 19:30:16.159691 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df495444-0d98-4704-955b-e3c41653b2e0" containerName="collect-profiles" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.159699 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="df495444-0d98-4704-955b-e3c41653b2e0" containerName="collect-profiles" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.159966 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="df495444-0d98-4704-955b-e3c41653b2e0" containerName="collect-profiles" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.160010 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b" containerName="oc" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.160023 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2707740-9be6-47c5-996c-43c292ad9758" containerName="tempest-tests-tempest-tests-runner" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.160960 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.167930 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-958xs" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.170510 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.297329 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"40ab3d43-846c-4b15-93e9-3b63e179fa73\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.297383 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kctdx\" (UniqueName: \"kubernetes.io/projected/40ab3d43-846c-4b15-93e9-3b63e179fa73-kube-api-access-kctdx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"40ab3d43-846c-4b15-93e9-3b63e179fa73\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.399007 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kctdx\" (UniqueName: \"kubernetes.io/projected/40ab3d43-846c-4b15-93e9-3b63e179fa73-kube-api-access-kctdx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"40ab3d43-846c-4b15-93e9-3b63e179fa73\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.399067 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"40ab3d43-846c-4b15-93e9-3b63e179fa73\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.399529 4708 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"40ab3d43-846c-4b15-93e9-3b63e179fa73\") device mount path \"/mnt/openstack/pv11\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.423138 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kctdx\" (UniqueName: \"kubernetes.io/projected/40ab3d43-846c-4b15-93e9-3b63e179fa73-kube-api-access-kctdx\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"40ab3d43-846c-4b15-93e9-3b63e179fa73\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.427919 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"40ab3d43-846c-4b15-93e9-3b63e179fa73\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.489414 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Feb 27 19:30:16 crc kubenswrapper[4708]: I0227 19:30:16.965009 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Feb 27 19:30:17 crc kubenswrapper[4708]: I0227 19:30:17.213542 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"40ab3d43-846c-4b15-93e9-3b63e179fa73","Type":"ContainerStarted","Data":"1d5d009ea2d8ac420f4ef1729f83e150a86df74224e0de89fa07092ab5050ee0"} Feb 27 19:30:17 crc kubenswrapper[4708]: I0227 19:30:17.228258 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:30:17 crc kubenswrapper[4708]: E0227 19:30:17.228533 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:30:18 crc kubenswrapper[4708]: I0227 19:30:18.223182 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"40ab3d43-846c-4b15-93e9-3b63e179fa73","Type":"ContainerStarted","Data":"28b21677f8104579b562f223d12b759c466d96441449ac036cc9afc3802d9c5f"} Feb 27 19:30:18 crc kubenswrapper[4708]: I0227 19:30:18.240219 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=1.378294307 podStartE2EDuration="2.240194197s" podCreationTimestamp="2026-02-27 19:30:16 +0000 UTC" firstStartedPulling="2026-02-27 19:30:16.96604142 +0000 UTC m=+9415.481839007" lastFinishedPulling="2026-02-27 19:30:17.82794131 +0000 UTC m=+9416.343738897" observedRunningTime="2026-02-27 19:30:18.237518642 +0000 UTC m=+9416.753316229" watchObservedRunningTime="2026-02-27 19:30:18.240194197 +0000 UTC m=+9416.755991804" 
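
The "Observed pod startup duration" entry above carries enough data to check its own arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that same span minus the image-pull window (lastFinishedPulling minus firstStartedPulling). A small Go sketch reproduces both logged values from the wall-clock timestamps (the monotonic "m=+..." suffixes are dropped); note the SLO-excludes-pull relationship is inferred from these logged numbers, not quoted from kubelet source:

    package main

    import (
        "fmt"
        "time"
    )

    // parse reads the wall-clock part of the timestamp format used in the log.
    func parse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := parse("2026-02-27 19:30:16 +0000 UTC")
        firstPull := parse("2026-02-27 19:30:16.96604142 +0000 UTC")
        lastPull := parse("2026-02-27 19:30:17.82794131 +0000 UTC")
        running := parse("2026-02-27 19:30:18.240194197 +0000 UTC")

        e2e := running.Sub(created)          // 2.240194197s = podStartE2EDuration
        slo := e2e - lastPull.Sub(firstPull) // 1.378294307s = podStartSLOduration
        fmt.Println(e2e, slo)
    }
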
Feb 27 19:30:32 crc kubenswrapper[4708]: I0227 19:30:32.238320 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:30:32 crc kubenswrapper[4708]: E0227 19:30:32.239229 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:30:32 crc kubenswrapper[4708]: I0227 19:30:32.719008 4708 scope.go:117] "RemoveContainer" containerID="c79b4ed1fe4777a4b0ef25111dea470667d9571a0ed51c8eef0bd0a106026ea8" Feb 27 19:30:32 crc kubenswrapper[4708]: I0227 19:30:32.763405 4708 scope.go:117] "RemoveContainer" containerID="c3856c3add79b215b9d90229b3dae36ead5e4240fc15f251713d331dd4b3694d" Feb 27 19:30:45 crc kubenswrapper[4708]: I0227 19:30:45.228606 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:30:45 crc kubenswrapper[4708]: E0227 19:30:45.229341 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:30:46 crc kubenswrapper[4708]: I0227 19:30:46.916574 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-b2jlw/must-gather-m8mj6"] Feb 27 19:30:46 crc kubenswrapper[4708]: I0227 19:30:46.918380 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b2jlw/must-gather-m8mj6"
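
The RemoveContainer / "back-off 5m0s" pairs above recur roughly every 10-15 seconds through the rest of the excerpt (19:30:45, 19:30:59, 19:31:10, ...). That cadence is the kubelet sync loop re-evaluating machine-config-daemon-kvxg2 and being refused while the pod's restart backoff sits at its ceiling; the backoff itself doubles per restart up to a cap. A minimal sketch, assuming the upstream kubelet defaults of a 10s initial delay, doubling growth, and the 5m ceiling the log reports (all hard-coded here for illustration):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const (
            initialDelay = 10 * time.Second // assumed kubelet default
            maxDelay     = 5 * time.Minute  // the "back-off 5m0s" in the log
        )
        delay := initialDelay
        for restart := 1; delay < maxDelay; restart++ {
            fmt.Printf("restart %d: wait %v\n", restart, delay)
            delay *= 2
        }
        fmt.Println("every later restart: wait", maxDelay)
    }

Under these assumptions the container reaches the 5m cap after a handful of restarts, which is why every retry in this window is rejected with the same CrashLoopBackOff message rather than a fresh start.
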
Feb 27 19:30:46 crc kubenswrapper[4708]: I0227 19:30:46.920531 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-b2jlw"/"default-dockercfg-mgnfx" Feb 27 19:30:46 crc kubenswrapper[4708]: I0227 19:30:46.920789 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-b2jlw"/"kube-root-ca.crt" Feb 27 19:30:46 crc kubenswrapper[4708]: I0227 19:30:46.920979 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-b2jlw"/"openshift-service-ca.crt" Feb 27 19:30:46 crc kubenswrapper[4708]: I0227 19:30:46.935755 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b2jlw/must-gather-m8mj6"] Feb 27 19:30:46 crc kubenswrapper[4708]: I0227 19:30:46.958936 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpmk2\" (UniqueName: \"kubernetes.io/projected/07f53736-cb9c-4e3e-b732-5c089ac23985-kube-api-access-xpmk2\") pod \"must-gather-m8mj6\" (UID: \"07f53736-cb9c-4e3e-b732-5c089ac23985\") " pod="openshift-must-gather-b2jlw/must-gather-m8mj6" Feb 27 19:30:46 crc kubenswrapper[4708]: I0227 19:30:46.959055 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/07f53736-cb9c-4e3e-b732-5c089ac23985-must-gather-output\") pod \"must-gather-m8mj6\" (UID: \"07f53736-cb9c-4e3e-b732-5c089ac23985\") " pod="openshift-must-gather-b2jlw/must-gather-m8mj6" Feb 27 19:30:47 crc kubenswrapper[4708]: I0227 19:30:47.061495 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpmk2\" (UniqueName: \"kubernetes.io/projected/07f53736-cb9c-4e3e-b732-5c089ac23985-kube-api-access-xpmk2\") pod \"must-gather-m8mj6\" (UID: \"07f53736-cb9c-4e3e-b732-5c089ac23985\") " pod="openshift-must-gather-b2jlw/must-gather-m8mj6" Feb 27 19:30:47 crc kubenswrapper[4708]: I0227 19:30:47.061575 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/07f53736-cb9c-4e3e-b732-5c089ac23985-must-gather-output\") pod \"must-gather-m8mj6\" (UID: \"07f53736-cb9c-4e3e-b732-5c089ac23985\") " pod="openshift-must-gather-b2jlw/must-gather-m8mj6" Feb 27 19:30:47 crc kubenswrapper[4708]: I0227 19:30:47.062189 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/07f53736-cb9c-4e3e-b732-5c089ac23985-must-gather-output\") pod \"must-gather-m8mj6\" (UID: \"07f53736-cb9c-4e3e-b732-5c089ac23985\") " pod="openshift-must-gather-b2jlw/must-gather-m8mj6" Feb 27 19:30:47 crc kubenswrapper[4708]: I0227 19:30:47.086179 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpmk2\" (UniqueName: \"kubernetes.io/projected/07f53736-cb9c-4e3e-b732-5c089ac23985-kube-api-access-xpmk2\") pod \"must-gather-m8mj6\" (UID: \"07f53736-cb9c-4e3e-b732-5c089ac23985\") " pod="openshift-must-gather-b2jlw/must-gather-m8mj6" Feb 27 19:30:47 crc kubenswrapper[4708]: I0227 19:30:47.240366 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b2jlw/must-gather-m8mj6" Feb 27 19:30:47 crc kubenswrapper[4708]: I0227 19:30:47.762740 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-b2jlw/must-gather-m8mj6"] Feb 27 19:30:48 crc kubenswrapper[4708]: I0227 19:30:48.528689 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b2jlw/must-gather-m8mj6" event={"ID":"07f53736-cb9c-4e3e-b732-5c089ac23985","Type":"ContainerStarted","Data":"24248746dbc0aec227cc89d67ecfffa4e86342422975e8da0588fb447ba784ba"} Feb 27 19:30:55 crc kubenswrapper[4708]: I0227 19:30:55.602560 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b2jlw/must-gather-m8mj6" event={"ID":"07f53736-cb9c-4e3e-b732-5c089ac23985","Type":"ContainerStarted","Data":"071727889d8dc3921573feff682a8afa732857eac2a00e3e9c6a12f67207c76b"} Feb 27 19:30:55 crc kubenswrapper[4708]: I0227 19:30:55.603223 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b2jlw/must-gather-m8mj6" event={"ID":"07f53736-cb9c-4e3e-b732-5c089ac23985","Type":"ContainerStarted","Data":"d1eca89e871f7689cd2816efd14734aef2a93fc88b662acee160e7bfcf293b99"} Feb 27 19:30:55 crc kubenswrapper[4708]: I0227 19:30:55.624109 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-b2jlw/must-gather-m8mj6" podStartSLOduration=2.333772111 podStartE2EDuration="9.624087261s" podCreationTimestamp="2026-02-27 19:30:46 +0000 UTC" firstStartedPulling="2026-02-27 19:30:47.769173118 +0000 UTC m=+9446.284970705" lastFinishedPulling="2026-02-27 19:30:55.059488268 +0000 UTC m=+9453.575285855" observedRunningTime="2026-02-27 19:30:55.615961322 +0000 UTC m=+9454.131758919" watchObservedRunningTime="2026-02-27 19:30:55.624087261 +0000 UTC m=+9454.139884848" Feb 27 19:30:59 crc kubenswrapper[4708]: I0227 19:30:59.203429 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-b2jlw/crc-debug-kgrz4"] Feb 27 19:30:59 crc kubenswrapper[4708]: I0227 19:30:59.205686 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" Feb 27 19:30:59 crc kubenswrapper[4708]: I0227 19:30:59.231804 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:30:59 crc kubenswrapper[4708]: E0227 19:30:59.232131 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:30:59 crc kubenswrapper[4708]: I0227 19:30:59.371417 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b7aef62-1a98-4a3e-99cd-f1f481a6dc98-host\") pod \"crc-debug-kgrz4\" (UID: \"0b7aef62-1a98-4a3e-99cd-f1f481a6dc98\") " pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" Feb 27 19:30:59 crc kubenswrapper[4708]: I0227 19:30:59.372121 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnxrs\" (UniqueName: \"kubernetes.io/projected/0b7aef62-1a98-4a3e-99cd-f1f481a6dc98-kube-api-access-gnxrs\") pod \"crc-debug-kgrz4\" (UID: \"0b7aef62-1a98-4a3e-99cd-f1f481a6dc98\") " pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" Feb 27 19:30:59 crc kubenswrapper[4708]: I0227 19:30:59.474706 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b7aef62-1a98-4a3e-99cd-f1f481a6dc98-host\") pod \"crc-debug-kgrz4\" (UID: \"0b7aef62-1a98-4a3e-99cd-f1f481a6dc98\") " pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" Feb 27 19:30:59 crc kubenswrapper[4708]: I0227 19:30:59.474902 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnxrs\" (UniqueName: \"kubernetes.io/projected/0b7aef62-1a98-4a3e-99cd-f1f481a6dc98-kube-api-access-gnxrs\") pod \"crc-debug-kgrz4\" (UID: \"0b7aef62-1a98-4a3e-99cd-f1f481a6dc98\") " pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" Feb 27 19:30:59 crc kubenswrapper[4708]: I0227 19:30:59.475123 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b7aef62-1a98-4a3e-99cd-f1f481a6dc98-host\") pod \"crc-debug-kgrz4\" (UID: \"0b7aef62-1a98-4a3e-99cd-f1f481a6dc98\") " pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" Feb 27 19:30:59 crc kubenswrapper[4708]: I0227 19:30:59.511639 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnxrs\" (UniqueName: \"kubernetes.io/projected/0b7aef62-1a98-4a3e-99cd-f1f481a6dc98-kube-api-access-gnxrs\") pod \"crc-debug-kgrz4\" (UID: \"0b7aef62-1a98-4a3e-99cd-f1f481a6dc98\") " pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" Feb 27 19:30:59 crc kubenswrapper[4708]: I0227 19:30:59.530763 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" Feb 27 19:30:59 crc kubenswrapper[4708]: I0227 19:30:59.643983 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" event={"ID":"0b7aef62-1a98-4a3e-99cd-f1f481a6dc98","Type":"ContainerStarted","Data":"e7d3388cec17a9684df059dd26b7eeb159422b2b18cfcb58b5e387900c6aa5a2"} Feb 27 19:31:10 crc kubenswrapper[4708]: I0227 19:31:10.228910 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:31:10 crc kubenswrapper[4708]: E0227 19:31:10.229649 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:31:15 crc kubenswrapper[4708]: E0227 19:31:15.283214 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Feb 27 19:31:15 crc kubenswrapper[4708]: E0227 19:31:15.284778 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,firewall_tables,firewalld,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; 
fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gnxrs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-kgrz4_openshift-must-gather-b2jlw(0b7aef62-1a98-4a3e-99cd-f1f481a6dc98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Feb 27 19:31:15 crc kubenswrapper[4708]: E0227 19:31:15.286132 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" podUID="0b7aef62-1a98-4a3e-99cd-f1f481a6dc98" Feb 27 19:31:15 crc kubenswrapper[4708]: E0227 19:31:15.813321 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" podUID="0b7aef62-1a98-4a3e-99cd-f1f481a6dc98" Feb 27 19:31:22 crc kubenswrapper[4708]: I0227 19:31:22.239681 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:31:22 crc kubenswrapper[4708]: E0227 19:31:22.240541 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:31:29 crc kubenswrapper[4708]: I0227 19:31:29.230866 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:31:29 crc kubenswrapper[4708]: I0227 19:31:29.945560 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" event={"ID":"0b7aef62-1a98-4a3e-99cd-f1f481a6dc98","Type":"ContainerStarted","Data":"0b0038c5c9e9cd04cdf6be268bc527dd307d6e5cbb2ae43f65cdf0ec7a71a222"} Feb 27 19:31:29 crc kubenswrapper[4708]: I0227 19:31:29.960626 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" 
podStartSLOduration=0.851046961 podStartE2EDuration="30.960606856s" podCreationTimestamp="2026-02-27 19:30:59 +0000 UTC" firstStartedPulling="2026-02-27 19:30:59.588609619 +0000 UTC m=+9458.104407206" lastFinishedPulling="2026-02-27 19:31:29.698169514 +0000 UTC m=+9488.213967101" observedRunningTime="2026-02-27 19:31:29.955975256 +0000 UTC m=+9488.471772843" watchObservedRunningTime="2026-02-27 19:31:29.960606856 +0000 UTC m=+9488.476404443" Feb 27 19:31:37 crc kubenswrapper[4708]: I0227 19:31:37.257913 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:31:37 crc kubenswrapper[4708]: E0227 19:31:37.258915 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:31:51 crc kubenswrapper[4708]: I0227 19:31:51.228994 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:31:51 crc kubenswrapper[4708]: E0227 19:31:51.230677 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:32:00 crc kubenswrapper[4708]: I0227 19:32:00.153668 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537012-99k6z"] Feb 27 19:32:00 crc kubenswrapper[4708]: I0227 19:32:00.155770 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537012-99k6z"
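
A side note on the pod names here: the numeric suffix on the CronJob-created pods (auto-csr-approver-29537010/-29537012, collect-profiles-29537010) decodes as the run's scheduled time expressed in minutes since the Unix epoch, which matches the upstream CronJob controller's Job-naming convention. The sketch below checks that against the logged wall clock, 29537010 landing on 19:30:00 and 29537012 on 19:32:00, i.e. the approver fires on a two-minute schedule:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        for _, m := range []int64{29537010, 29537012} {
            t := time.Unix(m*60, 0).UTC() // suffix = scheduled minutes since epoch
            fmt.Printf("%d -> %s\n", m, t)
        }
        // 29537010 -> 2026-02-27 19:30:00 +0000 UTC
        // 29537012 -> 2026-02-27 19:32:00 +0000 UTC
    }
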
Feb 27 19:32:00 crc kubenswrapper[4708]: I0227 19:32:00.164446 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:32:00 crc kubenswrapper[4708]: I0227 19:32:00.164474 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:32:00 crc kubenswrapper[4708]: I0227 19:32:00.164541 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:32:00 crc kubenswrapper[4708]: I0227 19:32:00.169047 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537012-99k6z"] Feb 27 19:32:00 crc kubenswrapper[4708]: I0227 19:32:00.302502 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhr6b\" (UniqueName: \"kubernetes.io/projected/440d9f6e-2360-49dc-bf60-0a544c990079-kube-api-access-rhr6b\") pod \"auto-csr-approver-29537012-99k6z\" (UID: \"440d9f6e-2360-49dc-bf60-0a544c990079\") " pod="openshift-infra/auto-csr-approver-29537012-99k6z" Feb 27 19:32:00 crc kubenswrapper[4708]: I0227 19:32:00.404081 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rhr6b\" (UniqueName: \"kubernetes.io/projected/440d9f6e-2360-49dc-bf60-0a544c990079-kube-api-access-rhr6b\") pod \"auto-csr-approver-29537012-99k6z\" (UID: \"440d9f6e-2360-49dc-bf60-0a544c990079\") " pod="openshift-infra/auto-csr-approver-29537012-99k6z" Feb 27 19:32:00 crc kubenswrapper[4708]: I0227 19:32:00.425178 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rhr6b\" (UniqueName: \"kubernetes.io/projected/440d9f6e-2360-49dc-bf60-0a544c990079-kube-api-access-rhr6b\") pod \"auto-csr-approver-29537012-99k6z\" (UID: \"440d9f6e-2360-49dc-bf60-0a544c990079\") " pod="openshift-infra/auto-csr-approver-29537012-99k6z" Feb 27 19:32:00 crc kubenswrapper[4708]: I0227 19:32:00.479913 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537012-99k6z" Feb 27 19:32:01 crc kubenswrapper[4708]: I0227 19:32:01.075467 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537012-99k6z"] Feb 27 19:32:01 crc kubenswrapper[4708]: I0227 19:32:01.235321 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537012-99k6z" event={"ID":"440d9f6e-2360-49dc-bf60-0a544c990079","Type":"ContainerStarted","Data":"e183e8127081c4a3c9c7654a0f20b6a281a3a9244c9319dd1fd4249f50c6ac82"} Feb 27 19:32:03 crc kubenswrapper[4708]: I0227 19:32:03.228711 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:32:03 crc kubenswrapper[4708]: E0227 19:32:03.229490 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:32:17 crc kubenswrapper[4708]: I0227 19:32:17.357965 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zhx8c"] Feb 27 19:32:17 crc kubenswrapper[4708]: I0227 19:32:17.361134 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zhx8c" Feb 27 19:32:17 crc kubenswrapper[4708]: I0227 19:32:17.378775 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zhx8c"] Feb 27 19:32:17 crc kubenswrapper[4708]: I0227 19:32:17.535519 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd8vv\" (UniqueName: \"kubernetes.io/projected/87f28690-279d-4b39-a329-221d10842d68-kube-api-access-sd8vv\") pod \"certified-operators-zhx8c\" (UID: \"87f28690-279d-4b39-a329-221d10842d68\") " pod="openshift-marketplace/certified-operators-zhx8c" Feb 27 19:32:17 crc kubenswrapper[4708]: I0227 19:32:17.535883 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f28690-279d-4b39-a329-221d10842d68-utilities\") pod \"certified-operators-zhx8c\" (UID: \"87f28690-279d-4b39-a329-221d10842d68\") " pod="openshift-marketplace/certified-operators-zhx8c" Feb 27 19:32:17 crc kubenswrapper[4708]: I0227 19:32:17.535917 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f28690-279d-4b39-a329-221d10842d68-catalog-content\") pod \"certified-operators-zhx8c\" (UID: \"87f28690-279d-4b39-a329-221d10842d68\") " pod="openshift-marketplace/certified-operators-zhx8c" Feb 27 19:32:17 crc kubenswrapper[4708]: I0227 19:32:17.637729 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f28690-279d-4b39-a329-221d10842d68-utilities\") pod \"certified-operators-zhx8c\" (UID: \"87f28690-279d-4b39-a329-221d10842d68\") " pod="openshift-marketplace/certified-operators-zhx8c" Feb 27 19:32:17 crc kubenswrapper[4708]: I0227 19:32:17.637788 4708 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-sd8vv\" (UniqueName: \"kubernetes.io/projected/87f28690-279d-4b39-a329-221d10842d68-kube-api-access-sd8vv\") pod \"certified-operators-zhx8c\" (UID: \"87f28690-279d-4b39-a329-221d10842d68\") " pod="openshift-marketplace/certified-operators-zhx8c" Feb 27 19:32:17 crc kubenswrapper[4708]: I0227 19:32:17.637823 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f28690-279d-4b39-a329-221d10842d68-catalog-content\") pod \"certified-operators-zhx8c\" (UID: \"87f28690-279d-4b39-a329-221d10842d68\") " pod="openshift-marketplace/certified-operators-zhx8c" Feb 27 19:32:17 crc kubenswrapper[4708]: I0227 19:32:17.638222 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f28690-279d-4b39-a329-221d10842d68-utilities\") pod \"certified-operators-zhx8c\" (UID: \"87f28690-279d-4b39-a329-221d10842d68\") " pod="openshift-marketplace/certified-operators-zhx8c" Feb 27 19:32:17 crc kubenswrapper[4708]: I0227 19:32:17.639784 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f28690-279d-4b39-a329-221d10842d68-catalog-content\") pod \"certified-operators-zhx8c\" (UID: \"87f28690-279d-4b39-a329-221d10842d68\") " pod="openshift-marketplace/certified-operators-zhx8c" Feb 27 19:32:17 crc kubenswrapper[4708]: I0227 19:32:17.662813 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd8vv\" (UniqueName: \"kubernetes.io/projected/87f28690-279d-4b39-a329-221d10842d68-kube-api-access-sd8vv\") pod \"certified-operators-zhx8c\" (UID: \"87f28690-279d-4b39-a329-221d10842d68\") " pod="openshift-marketplace/certified-operators-zhx8c" Feb 27 19:32:17 crc kubenswrapper[4708]: I0227 19:32:17.704821 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zhx8c" Feb 27 19:32:18 crc kubenswrapper[4708]: I0227 19:32:18.228463 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:32:18 crc kubenswrapper[4708]: E0227 19:32:18.229378 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:32:18 crc kubenswrapper[4708]: I0227 19:32:18.361484 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zhx8c"] Feb 27 19:32:18 crc kubenswrapper[4708]: I0227 19:32:18.453816 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhx8c" event={"ID":"87f28690-279d-4b39-a329-221d10842d68","Type":"ContainerStarted","Data":"1012c670d328e77d6f7a36294b0e4f341447764dd54731449aad42bc6560902b"} Feb 27 19:32:19 crc kubenswrapper[4708]: I0227 19:32:19.465927 4708 generic.go:334] "Generic (PLEG): container finished" podID="87f28690-279d-4b39-a329-221d10842d68" containerID="d3b21fdb642ad6a2bdb45529a10a6fd3d87023cb390df996859cbd94ca6b4e4a" exitCode=0 Feb 27 19:32:19 crc kubenswrapper[4708]: I0227 19:32:19.466021 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhx8c" event={"ID":"87f28690-279d-4b39-a329-221d10842d68","Type":"ContainerDied","Data":"d3b21fdb642ad6a2bdb45529a10a6fd3d87023cb390df996859cbd94ca6b4e4a"} Feb 27 19:32:20 crc kubenswrapper[4708]: I0227 19:32:20.476688 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhx8c" event={"ID":"87f28690-279d-4b39-a329-221d10842d68","Type":"ContainerStarted","Data":"9f1a7cd17687e543cba973e070a3317a61f9e5abd8fdb44a3a7baf08e15c1553"} Feb 27 19:32:21 crc kubenswrapper[4708]: I0227 19:32:21.488327 4708 generic.go:334] "Generic (PLEG): container finished" podID="87f28690-279d-4b39-a329-221d10842d68" containerID="9f1a7cd17687e543cba973e070a3317a61f9e5abd8fdb44a3a7baf08e15c1553" exitCode=0 Feb 27 19:32:21 crc kubenswrapper[4708]: I0227 19:32:21.488371 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhx8c" event={"ID":"87f28690-279d-4b39-a329-221d10842d68","Type":"ContainerDied","Data":"9f1a7cd17687e543cba973e070a3317a61f9e5abd8fdb44a3a7baf08e15c1553"} Feb 27 19:32:22 crc kubenswrapper[4708]: I0227 19:32:22.502259 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhx8c" event={"ID":"87f28690-279d-4b39-a329-221d10842d68","Type":"ContainerStarted","Data":"d8c7725b4047d8e527be4556ba054f818e046f75c616090b48ec261be2db903e"} Feb 27 19:32:22 crc kubenswrapper[4708]: I0227 19:32:22.533301 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zhx8c" podStartSLOduration=3.128334774 podStartE2EDuration="5.533280534s" podCreationTimestamp="2026-02-27 19:32:17 +0000 UTC" firstStartedPulling="2026-02-27 19:32:19.468508423 +0000 UTC m=+9537.984306010" lastFinishedPulling="2026-02-27 19:32:21.873454183 +0000 UTC m=+9540.389251770" observedRunningTime="2026-02-27 
Feb 27 19:32:27 crc kubenswrapper[4708]: I0227 19:32:27.705719 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zhx8c"
Feb 27 19:32:27 crc kubenswrapper[4708]: I0227 19:32:27.706367 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zhx8c"
Feb 27 19:32:27 crc kubenswrapper[4708]: I0227 19:32:27.775962 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zhx8c"
Feb 27 19:32:28 crc kubenswrapper[4708]: I0227 19:32:28.614778 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zhx8c"
Feb 27 19:32:29 crc kubenswrapper[4708]: I0227 19:32:29.228831 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4"
Feb 27 19:32:29 crc kubenswrapper[4708]: E0227 19:32:29.232216 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 19:32:31 crc kubenswrapper[4708]: I0227 19:32:31.338221 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zhx8c"]
Feb 27 19:32:31 crc kubenswrapper[4708]: I0227 19:32:31.338749 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zhx8c" podUID="87f28690-279d-4b39-a329-221d10842d68" containerName="registry-server" containerID="cri-o://d8c7725b4047d8e527be4556ba054f818e046f75c616090b48ec261be2db903e" gracePeriod=2
Feb 27 19:32:31 crc kubenswrapper[4708]: I0227 19:32:31.604660 4708 generic.go:334] "Generic (PLEG): container finished" podID="87f28690-279d-4b39-a329-221d10842d68" containerID="d8c7725b4047d8e527be4556ba054f818e046f75c616090b48ec261be2db903e" exitCode=0
Feb 27 19:32:31 crc kubenswrapper[4708]: I0227 19:32:31.604743 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhx8c" event={"ID":"87f28690-279d-4b39-a329-221d10842d68","Type":"ContainerDied","Data":"d8c7725b4047d8e527be4556ba054f818e046f75c616090b48ec261be2db903e"}
Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.426381 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zhx8c"
Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.522688 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f28690-279d-4b39-a329-221d10842d68-catalog-content\") pod \"87f28690-279d-4b39-a329-221d10842d68\" (UID: \"87f28690-279d-4b39-a329-221d10842d68\") "
Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.522829 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f28690-279d-4b39-a329-221d10842d68-utilities\") pod \"87f28690-279d-4b39-a329-221d10842d68\" (UID: \"87f28690-279d-4b39-a329-221d10842d68\") "
Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.523328 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sd8vv\" (UniqueName: \"kubernetes.io/projected/87f28690-279d-4b39-a329-221d10842d68-kube-api-access-sd8vv\") pod \"87f28690-279d-4b39-a329-221d10842d68\" (UID: \"87f28690-279d-4b39-a329-221d10842d68\") "
Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.525538 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f28690-279d-4b39-a329-221d10842d68-utilities" (OuterVolumeSpecName: "utilities") pod "87f28690-279d-4b39-a329-221d10842d68" (UID: "87f28690-279d-4b39-a329-221d10842d68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.551144 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87f28690-279d-4b39-a329-221d10842d68-kube-api-access-sd8vv" (OuterVolumeSpecName: "kube-api-access-sd8vv") pod "87f28690-279d-4b39-a329-221d10842d68" (UID: "87f28690-279d-4b39-a329-221d10842d68"). InnerVolumeSpecName "kube-api-access-sd8vv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 27 19:32:32 crc kubenswrapper[4708]: E0227 19:32:32.581992 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest"
Feb 27 19:32:32 crc kubenswrapper[4708]: E0227 19:32:32.582123 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 19:32:32 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Feb 27 19:32:32 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rhr6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537012-99k6z_openshift-infra(440d9f6e-2360-49dc-bf60-0a544c990079): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)
Feb 27 19:32:32 crc kubenswrapper[4708]: > logger="UnhandledError"
Feb 27 19:32:32 crc kubenswrapper[4708]: E0227 19:32:32.583310 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079"
Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.606678 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87f28690-279d-4b39-a329-221d10842d68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87f28690-279d-4b39-a329-221d10842d68" (UID: "87f28690-279d-4b39-a329-221d10842d68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.620110 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zhx8c"
Need to start a new one" pod="openshift-marketplace/certified-operators-zhx8c" Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.620576 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zhx8c" event={"ID":"87f28690-279d-4b39-a329-221d10842d68","Type":"ContainerDied","Data":"1012c670d328e77d6f7a36294b0e4f341447764dd54731449aad42bc6560902b"} Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.620626 4708 scope.go:117] "RemoveContainer" containerID="d8c7725b4047d8e527be4556ba054f818e046f75c616090b48ec261be2db903e" Feb 27 19:32:32 crc kubenswrapper[4708]: E0227 19:32:32.622001 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.625423 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sd8vv\" (UniqueName: \"kubernetes.io/projected/87f28690-279d-4b39-a329-221d10842d68-kube-api-access-sd8vv\") on node \"crc\" DevicePath \"\"" Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.625445 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87f28690-279d-4b39-a329-221d10842d68-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.625472 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87f28690-279d-4b39-a329-221d10842d68-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.662798 4708 scope.go:117] "RemoveContainer" containerID="9f1a7cd17687e543cba973e070a3317a61f9e5abd8fdb44a3a7baf08e15c1553" Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.668880 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zhx8c"] Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.677570 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zhx8c"] Feb 27 19:32:32 crc kubenswrapper[4708]: I0227 19:32:32.689180 4708 scope.go:117] "RemoveContainer" containerID="d3b21fdb642ad6a2bdb45529a10a6fd3d87023cb390df996859cbd94ca6b4e4a" Feb 27 19:32:34 crc kubenswrapper[4708]: I0227 19:32:34.240097 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87f28690-279d-4b39-a329-221d10842d68" path="/var/lib/kubelet/pods/87f28690-279d-4b39-a329-221d10842d68/volumes" Feb 27 19:32:43 crc kubenswrapper[4708]: I0227 19:32:43.230019 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:32:43 crc kubenswrapper[4708]: E0227 19:32:43.230822 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:32:43 crc kubenswrapper[4708]: I0227 19:32:43.976156 4708 generic.go:334] "Generic (PLEG): container finished" podID="0b7aef62-1a98-4a3e-99cd-f1f481a6dc98" 
containerID="0b0038c5c9e9cd04cdf6be268bc527dd307d6e5cbb2ae43f65cdf0ec7a71a222" exitCode=0 Feb 27 19:32:43 crc kubenswrapper[4708]: I0227 19:32:43.976222 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" event={"ID":"0b7aef62-1a98-4a3e-99cd-f1f481a6dc98","Type":"ContainerDied","Data":"0b0038c5c9e9cd04cdf6be268bc527dd307d6e5cbb2ae43f65cdf0ec7a71a222"} Feb 27 19:32:45 crc kubenswrapper[4708]: I0227 19:32:45.084952 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" Feb 27 19:32:45 crc kubenswrapper[4708]: I0227 19:32:45.127589 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-b2jlw/crc-debug-kgrz4"] Feb 27 19:32:45 crc kubenswrapper[4708]: I0227 19:32:45.136560 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-b2jlw/crc-debug-kgrz4"] Feb 27 19:32:45 crc kubenswrapper[4708]: I0227 19:32:45.272597 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b7aef62-1a98-4a3e-99cd-f1f481a6dc98-host\") pod \"0b7aef62-1a98-4a3e-99cd-f1f481a6dc98\" (UID: \"0b7aef62-1a98-4a3e-99cd-f1f481a6dc98\") " Feb 27 19:32:45 crc kubenswrapper[4708]: I0227 19:32:45.272682 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b7aef62-1a98-4a3e-99cd-f1f481a6dc98-host" (OuterVolumeSpecName: "host") pod "0b7aef62-1a98-4a3e-99cd-f1f481a6dc98" (UID: "0b7aef62-1a98-4a3e-99cd-f1f481a6dc98"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:32:45 crc kubenswrapper[4708]: I0227 19:32:45.272716 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gnxrs\" (UniqueName: \"kubernetes.io/projected/0b7aef62-1a98-4a3e-99cd-f1f481a6dc98-kube-api-access-gnxrs\") pod \"0b7aef62-1a98-4a3e-99cd-f1f481a6dc98\" (UID: \"0b7aef62-1a98-4a3e-99cd-f1f481a6dc98\") " Feb 27 19:32:45 crc kubenswrapper[4708]: I0227 19:32:45.273511 4708 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0b7aef62-1a98-4a3e-99cd-f1f481a6dc98-host\") on node \"crc\" DevicePath \"\"" Feb 27 19:32:45 crc kubenswrapper[4708]: I0227 19:32:45.279257 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b7aef62-1a98-4a3e-99cd-f1f481a6dc98-kube-api-access-gnxrs" (OuterVolumeSpecName: "kube-api-access-gnxrs") pod "0b7aef62-1a98-4a3e-99cd-f1f481a6dc98" (UID: "0b7aef62-1a98-4a3e-99cd-f1f481a6dc98"). InnerVolumeSpecName "kube-api-access-gnxrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:32:45 crc kubenswrapper[4708]: I0227 19:32:45.375637 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gnxrs\" (UniqueName: \"kubernetes.io/projected/0b7aef62-1a98-4a3e-99cd-f1f481a6dc98-kube-api-access-gnxrs\") on node \"crc\" DevicePath \"\"" Feb 27 19:32:45 crc kubenswrapper[4708]: I0227 19:32:45.994655 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e7d3388cec17a9684df059dd26b7eeb159422b2b18cfcb58b5e387900c6aa5a2" Feb 27 19:32:45 crc kubenswrapper[4708]: I0227 19:32:45.994735 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b2jlw/crc-debug-kgrz4" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.241682 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b7aef62-1a98-4a3e-99cd-f1f481a6dc98" path="/var/lib/kubelet/pods/0b7aef62-1a98-4a3e-99cd-f1f481a6dc98/volumes" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.306674 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-b2jlw/crc-debug-f442k"] Feb 27 19:32:46 crc kubenswrapper[4708]: E0227 19:32:46.307127 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f28690-279d-4b39-a329-221d10842d68" containerName="extract-content" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.307146 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f28690-279d-4b39-a329-221d10842d68" containerName="extract-content" Feb 27 19:32:46 crc kubenswrapper[4708]: E0227 19:32:46.307163 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b7aef62-1a98-4a3e-99cd-f1f481a6dc98" containerName="container-00" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.307169 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b7aef62-1a98-4a3e-99cd-f1f481a6dc98" containerName="container-00" Feb 27 19:32:46 crc kubenswrapper[4708]: E0227 19:32:46.307188 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f28690-279d-4b39-a329-221d10842d68" containerName="registry-server" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.307196 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f28690-279d-4b39-a329-221d10842d68" containerName="registry-server" Feb 27 19:32:46 crc kubenswrapper[4708]: E0227 19:32:46.307212 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87f28690-279d-4b39-a329-221d10842d68" containerName="extract-utilities" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.307217 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="87f28690-279d-4b39-a329-221d10842d68" containerName="extract-utilities" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.307388 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b7aef62-1a98-4a3e-99cd-f1f481a6dc98" containerName="container-00" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.307413 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="87f28690-279d-4b39-a329-221d10842d68" containerName="registry-server" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.308068 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b2jlw/crc-debug-f442k" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.396576 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtf7g\" (UniqueName: \"kubernetes.io/projected/97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc-kube-api-access-vtf7g\") pod \"crc-debug-f442k\" (UID: \"97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc\") " pod="openshift-must-gather-b2jlw/crc-debug-f442k" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.396623 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc-host\") pod \"crc-debug-f442k\" (UID: \"97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc\") " pod="openshift-must-gather-b2jlw/crc-debug-f442k" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.499439 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtf7g\" (UniqueName: \"kubernetes.io/projected/97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc-kube-api-access-vtf7g\") pod \"crc-debug-f442k\" (UID: \"97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc\") " pod="openshift-must-gather-b2jlw/crc-debug-f442k" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.499512 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc-host\") pod \"crc-debug-f442k\" (UID: \"97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc\") " pod="openshift-must-gather-b2jlw/crc-debug-f442k" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.499762 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc-host\") pod \"crc-debug-f442k\" (UID: \"97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc\") " pod="openshift-must-gather-b2jlw/crc-debug-f442k" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.523703 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtf7g\" (UniqueName: \"kubernetes.io/projected/97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc-kube-api-access-vtf7g\") pod \"crc-debug-f442k\" (UID: \"97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc\") " pod="openshift-must-gather-b2jlw/crc-debug-f442k" Feb 27 19:32:46 crc kubenswrapper[4708]: I0227 19:32:46.626929 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b2jlw/crc-debug-f442k" Feb 27 19:32:47 crc kubenswrapper[4708]: I0227 19:32:47.005827 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b2jlw/crc-debug-f442k" event={"ID":"97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc","Type":"ContainerStarted","Data":"c2e6231a7d4930b3a3fbacbc9f55c4e3ce56fac474277d48c1fd50c890fb9469"} Feb 27 19:32:47 crc kubenswrapper[4708]: I0227 19:32:47.005906 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b2jlw/crc-debug-f442k" event={"ID":"97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc","Type":"ContainerStarted","Data":"404fd6c3adb7301d0407e3b6a76b40aeba4e768f49b2feafb76ab9faf7c02bd5"} Feb 27 19:32:47 crc kubenswrapper[4708]: I0227 19:32:47.024169 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-b2jlw/crc-debug-f442k" podStartSLOduration=1.024146897 podStartE2EDuration="1.024146897s" podCreationTimestamp="2026-02-27 19:32:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-02-27 19:32:47.017726546 +0000 UTC m=+9565.533524133" watchObservedRunningTime="2026-02-27 19:32:47.024146897 +0000 UTC m=+9565.539944484" Feb 27 19:32:48 crc kubenswrapper[4708]: I0227 19:32:48.017050 4708 generic.go:334] "Generic (PLEG): container finished" podID="97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc" containerID="c2e6231a7d4930b3a3fbacbc9f55c4e3ce56fac474277d48c1fd50c890fb9469" exitCode=0 Feb 27 19:32:48 crc kubenswrapper[4708]: I0227 19:32:48.017107 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b2jlw/crc-debug-f442k" event={"ID":"97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc","Type":"ContainerDied","Data":"c2e6231a7d4930b3a3fbacbc9f55c4e3ce56fac474277d48c1fd50c890fb9469"} Feb 27 19:32:49 crc kubenswrapper[4708]: I0227 19:32:49.183173 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b2jlw/crc-debug-f442k" Feb 27 19:32:49 crc kubenswrapper[4708]: I0227 19:32:49.256550 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtf7g\" (UniqueName: \"kubernetes.io/projected/97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc-kube-api-access-vtf7g\") pod \"97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc\" (UID: \"97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc\") " Feb 27 19:32:49 crc kubenswrapper[4708]: I0227 19:32:49.256753 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc-host\") pod \"97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc\" (UID: \"97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc\") " Feb 27 19:32:49 crc kubenswrapper[4708]: I0227 19:32:49.257465 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc-host" (OuterVolumeSpecName: "host") pod "97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc" (UID: "97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:32:49 crc kubenswrapper[4708]: I0227 19:32:49.263387 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc-kube-api-access-vtf7g" (OuterVolumeSpecName: "kube-api-access-vtf7g") pod "97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc" (UID: "97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc"). 
InnerVolumeSpecName "kube-api-access-vtf7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:32:49 crc kubenswrapper[4708]: I0227 19:32:49.360195 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtf7g\" (UniqueName: \"kubernetes.io/projected/97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc-kube-api-access-vtf7g\") on node \"crc\" DevicePath \"\"" Feb 27 19:32:49 crc kubenswrapper[4708]: I0227 19:32:49.360232 4708 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc-host\") on node \"crc\" DevicePath \"\"" Feb 27 19:32:49 crc kubenswrapper[4708]: I0227 19:32:49.608467 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-b2jlw/crc-debug-f442k"] Feb 27 19:32:49 crc kubenswrapper[4708]: I0227 19:32:49.621665 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-b2jlw/crc-debug-f442k"] Feb 27 19:32:50 crc kubenswrapper[4708]: I0227 19:32:50.043031 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="404fd6c3adb7301d0407e3b6a76b40aeba4e768f49b2feafb76ab9faf7c02bd5" Feb 27 19:32:50 crc kubenswrapper[4708]: I0227 19:32:50.043079 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b2jlw/crc-debug-f442k" Feb 27 19:32:50 crc kubenswrapper[4708]: I0227 19:32:50.239755 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc" path="/var/lib/kubelet/pods/97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc/volumes" Feb 27 19:32:50 crc kubenswrapper[4708]: I0227 19:32:50.769262 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-b2jlw/crc-debug-dbvhf"] Feb 27 19:32:50 crc kubenswrapper[4708]: E0227 19:32:50.769707 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc" containerName="container-00" Feb 27 19:32:50 crc kubenswrapper[4708]: I0227 19:32:50.769728 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc" containerName="container-00" Feb 27 19:32:50 crc kubenswrapper[4708]: I0227 19:32:50.770025 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="97f1a73c-63e0-4ae4-9ebe-09b4cc6194fc" containerName="container-00" Feb 27 19:32:50 crc kubenswrapper[4708]: I0227 19:32:50.770716 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b2jlw/crc-debug-dbvhf" Feb 27 19:32:50 crc kubenswrapper[4708]: I0227 19:32:50.897726 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0424d8a-5c20-43d6-8c94-9dc095c888c0-host\") pod \"crc-debug-dbvhf\" (UID: \"c0424d8a-5c20-43d6-8c94-9dc095c888c0\") " pod="openshift-must-gather-b2jlw/crc-debug-dbvhf" Feb 27 19:32:50 crc kubenswrapper[4708]: I0227 19:32:50.898007 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls79j\" (UniqueName: \"kubernetes.io/projected/c0424d8a-5c20-43d6-8c94-9dc095c888c0-kube-api-access-ls79j\") pod \"crc-debug-dbvhf\" (UID: \"c0424d8a-5c20-43d6-8c94-9dc095c888c0\") " pod="openshift-must-gather-b2jlw/crc-debug-dbvhf" Feb 27 19:32:51 crc kubenswrapper[4708]: I0227 19:32:51.000145 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls79j\" (UniqueName: \"kubernetes.io/projected/c0424d8a-5c20-43d6-8c94-9dc095c888c0-kube-api-access-ls79j\") pod \"crc-debug-dbvhf\" (UID: \"c0424d8a-5c20-43d6-8c94-9dc095c888c0\") " pod="openshift-must-gather-b2jlw/crc-debug-dbvhf" Feb 27 19:32:51 crc kubenswrapper[4708]: I0227 19:32:51.000439 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0424d8a-5c20-43d6-8c94-9dc095c888c0-host\") pod \"crc-debug-dbvhf\" (UID: \"c0424d8a-5c20-43d6-8c94-9dc095c888c0\") " pod="openshift-must-gather-b2jlw/crc-debug-dbvhf" Feb 27 19:32:51 crc kubenswrapper[4708]: I0227 19:32:51.000606 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0424d8a-5c20-43d6-8c94-9dc095c888c0-host\") pod \"crc-debug-dbvhf\" (UID: \"c0424d8a-5c20-43d6-8c94-9dc095c888c0\") " pod="openshift-must-gather-b2jlw/crc-debug-dbvhf" Feb 27 19:32:51 crc kubenswrapper[4708]: I0227 19:32:51.022872 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls79j\" (UniqueName: \"kubernetes.io/projected/c0424d8a-5c20-43d6-8c94-9dc095c888c0-kube-api-access-ls79j\") pod \"crc-debug-dbvhf\" (UID: \"c0424d8a-5c20-43d6-8c94-9dc095c888c0\") " pod="openshift-must-gather-b2jlw/crc-debug-dbvhf" Feb 27 19:32:51 crc kubenswrapper[4708]: I0227 19:32:51.090040 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-b2jlw/crc-debug-dbvhf" Feb 27 19:32:51 crc kubenswrapper[4708]: W0227 19:32:51.117642 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0424d8a_5c20_43d6_8c94_9dc095c888c0.slice/crio-06b441ea98cbbe23a7e87c6147bc525db3fc03f3e189b795db9acc819944ad2e WatchSource:0}: Error finding container 06b441ea98cbbe23a7e87c6147bc525db3fc03f3e189b795db9acc819944ad2e: Status 404 returned error can't find the container with id 06b441ea98cbbe23a7e87c6147bc525db3fc03f3e189b795db9acc819944ad2e Feb 27 19:32:52 crc kubenswrapper[4708]: I0227 19:32:52.064537 4708 generic.go:334] "Generic (PLEG): container finished" podID="c0424d8a-5c20-43d6-8c94-9dc095c888c0" containerID="b10bc290a6c9466ba5f182a8106fe212ffdcd365cacd664a1137075bac475185" exitCode=0 Feb 27 19:32:52 crc kubenswrapper[4708]: I0227 19:32:52.064594 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b2jlw/crc-debug-dbvhf" event={"ID":"c0424d8a-5c20-43d6-8c94-9dc095c888c0","Type":"ContainerDied","Data":"b10bc290a6c9466ba5f182a8106fe212ffdcd365cacd664a1137075bac475185"} Feb 27 19:32:52 crc kubenswrapper[4708]: I0227 19:32:52.065175 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b2jlw/crc-debug-dbvhf" event={"ID":"c0424d8a-5c20-43d6-8c94-9dc095c888c0","Type":"ContainerStarted","Data":"06b441ea98cbbe23a7e87c6147bc525db3fc03f3e189b795db9acc819944ad2e"} Feb 27 19:32:52 crc kubenswrapper[4708]: I0227 19:32:52.129286 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-b2jlw/crc-debug-dbvhf"] Feb 27 19:32:52 crc kubenswrapper[4708]: I0227 19:32:52.147219 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-b2jlw/crc-debug-dbvhf"] Feb 27 19:32:53 crc kubenswrapper[4708]: I0227 19:32:53.192199 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b2jlw/crc-debug-dbvhf" Feb 27 19:32:53 crc kubenswrapper[4708]: I0227 19:32:53.343619 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0424d8a-5c20-43d6-8c94-9dc095c888c0-host\") pod \"c0424d8a-5c20-43d6-8c94-9dc095c888c0\" (UID: \"c0424d8a-5c20-43d6-8c94-9dc095c888c0\") " Feb 27 19:32:53 crc kubenswrapper[4708]: I0227 19:32:53.343772 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c0424d8a-5c20-43d6-8c94-9dc095c888c0-host" (OuterVolumeSpecName: "host") pod "c0424d8a-5c20-43d6-8c94-9dc095c888c0" (UID: "c0424d8a-5c20-43d6-8c94-9dc095c888c0"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 27 19:32:53 crc kubenswrapper[4708]: I0227 19:32:53.343869 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls79j\" (UniqueName: \"kubernetes.io/projected/c0424d8a-5c20-43d6-8c94-9dc095c888c0-kube-api-access-ls79j\") pod \"c0424d8a-5c20-43d6-8c94-9dc095c888c0\" (UID: \"c0424d8a-5c20-43d6-8c94-9dc095c888c0\") " Feb 27 19:32:53 crc kubenswrapper[4708]: I0227 19:32:53.344599 4708 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/c0424d8a-5c20-43d6-8c94-9dc095c888c0-host\") on node \"crc\" DevicePath \"\"" Feb 27 19:32:53 crc kubenswrapper[4708]: I0227 19:32:53.350078 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0424d8a-5c20-43d6-8c94-9dc095c888c0-kube-api-access-ls79j" (OuterVolumeSpecName: "kube-api-access-ls79j") pod "c0424d8a-5c20-43d6-8c94-9dc095c888c0" (UID: "c0424d8a-5c20-43d6-8c94-9dc095c888c0"). InnerVolumeSpecName "kube-api-access-ls79j". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:32:53 crc kubenswrapper[4708]: I0227 19:32:53.446027 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls79j\" (UniqueName: \"kubernetes.io/projected/c0424d8a-5c20-43d6-8c94-9dc095c888c0-kube-api-access-ls79j\") on node \"crc\" DevicePath \"\"" Feb 27 19:32:54 crc kubenswrapper[4708]: I0227 19:32:54.085553 4708 scope.go:117] "RemoveContainer" containerID="b10bc290a6c9466ba5f182a8106fe212ffdcd365cacd664a1137075bac475185" Feb 27 19:32:54 crc kubenswrapper[4708]: I0227 19:32:54.085593 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b2jlw/crc-debug-dbvhf" Feb 27 19:32:54 crc kubenswrapper[4708]: I0227 19:32:54.244284 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c0424d8a-5c20-43d6-8c94-9dc095c888c0" path="/var/lib/kubelet/pods/c0424d8a-5c20-43d6-8c94-9dc095c888c0/volumes" Feb 27 19:32:54 crc kubenswrapper[4708]: E0227 19:32:54.278146 4708 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc0424d8a_5c20_43d6_8c94_9dc095c888c0.slice\": RecentStats: unable to find data in memory cache]" Feb 27 19:32:56 crc kubenswrapper[4708]: I0227 19:32:56.228993 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:32:56 crc kubenswrapper[4708]: E0227 19:32:56.229420 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:33:07 crc kubenswrapper[4708]: I0227 19:33:07.228639 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:33:07 crc kubenswrapper[4708]: E0227 19:33:07.230819 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
Feb 27 19:33:21 crc kubenswrapper[4708]: I0227 19:33:21.228485 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4"
Feb 27 19:33:21 crc kubenswrapper[4708]: E0227 19:33:21.229225 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
Feb 27 19:33:23 crc kubenswrapper[4708]: I0227 19:33:23.539236 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_6cc07076-e637-443a-85c1-7b72beeb6cc7/init-config-reloader/0.log"
Feb 27 19:33:23 crc kubenswrapper[4708]: I0227 19:33:23.783901 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_6cc07076-e637-443a-85c1-7b72beeb6cc7/init-config-reloader/0.log"
Feb 27 19:33:23 crc kubenswrapper[4708]: I0227 19:33:23.784361 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_6cc07076-e637-443a-85c1-7b72beeb6cc7/alertmanager/0.log"
Feb 27 19:33:23 crc kubenswrapper[4708]: I0227 19:33:23.801554 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_alertmanager-metric-storage-0_6cc07076-e637-443a-85c1-7b72beeb6cc7/config-reloader/0.log"
Feb 27 19:33:24 crc kubenswrapper[4708]: I0227 19:33:24.198029 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c6cc57cfd-rj6nd_46ded50a-aa4c-47e7-8768-82bb22fff933/barbican-api-log/0.log"
Feb 27 19:33:24 crc kubenswrapper[4708]: I0227 19:33:24.292446 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-7c6cc57cfd-rj6nd_46ded50a-aa4c-47e7-8768-82bb22fff933/barbican-api/0.log"
Feb 27 19:33:24 crc kubenswrapper[4708]: I0227 19:33:24.414840 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-777c49d4fd-pzrvc_dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd/barbican-keystone-listener/0.log"
Feb 27 19:33:24 crc kubenswrapper[4708]: I0227 19:33:24.546238 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b688b6d95-78fb7_317368d9-8188-4337-9a05-e504c8e90b84/barbican-worker/0.log"
Feb 27 19:33:24 crc kubenswrapper[4708]: I0227 19:33:24.547635 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-777c49d4fd-pzrvc_dd5a8cf7-6eb5-4dde-88e5-0b66e0f142bd/barbican-keystone-listener-log/0.log"
Feb 27 19:33:24 crc kubenswrapper[4708]: I0227 19:33:24.691433 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5b688b6d95-78fb7_317368d9-8188-4337-9a05-e504c8e90b84/barbican-worker-log/0.log"
Feb 27 19:33:24 crc kubenswrapper[4708]: I0227 19:33:24.807613 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-5gpv5_14f3f808-a956-4da2-a9b6-b355ff4e2726/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 27 19:33:25 crc kubenswrapper[4708]: I0227 19:33:25.098569 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c1ce78ce-446b-4a42-bd4f-59fe2264e7c2/ceilometer-central-agent/0.log"
Feb 27 19:33:25 crc kubenswrapper[4708]: I0227 19:33:25.101259 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c1ce78ce-446b-4a42-bd4f-59fe2264e7c2/ceilometer-notification-agent/0.log"
Feb 27 19:33:25 crc kubenswrapper[4708]: I0227 19:33:25.143612 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c1ce78ce-446b-4a42-bd4f-59fe2264e7c2/sg-core/0.log"
Feb 27 19:33:25 crc kubenswrapper[4708]: I0227 19:33:25.155984 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_c1ce78ce-446b-4a42-bd4f-59fe2264e7c2/proxy-httpd/0.log"
Feb 27 19:33:25 crc kubenswrapper[4708]: I0227 19:33:25.466389 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_395e91ca-8629-4557-bcb7-f84d7f61b61d/cinder-api-log/0.log"
Feb 27 19:33:25 crc kubenswrapper[4708]: I0227 19:33:25.595562 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_395e91ca-8629-4557-bcb7-f84d7f61b61d/cinder-api/0.log"
Feb 27 19:33:25 crc kubenswrapper[4708]: I0227 19:33:25.686262 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8a8a6d87-beea-472a-a795-a8fc5daf0bde/cinder-scheduler/0.log"
Feb 27 19:33:25 crc kubenswrapper[4708]: I0227 19:33:25.792764 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_8a8a6d87-beea-472a-a795-a8fc5daf0bde/probe/0.log"
Feb 27 19:33:25 crc kubenswrapper[4708]: I0227 19:33:25.951756 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_74bd4940-0cc6-4cc2-a593-60b6656899cb/cloudkitty-api-log/0.log"
Feb 27 19:33:26 crc kubenswrapper[4708]: I0227 19:33:26.012292 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-api-0_74bd4940-0cc6-4cc2-a593-60b6656899cb/cloudkitty-api/0.log"
Feb 27 19:33:26 crc kubenswrapper[4708]: I0227 19:33:26.072923 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-compactor-0_cb80bc89-9a5d-4ade-89d7-99d39732a907/loki-compactor/0.log"
Feb 27 19:33:26 crc kubenswrapper[4708]: I0227 19:33:26.227546 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-distributor-585d9bcbc-xjzz8_e9768cf3-76f8-46d6-bfc4-8536e88e92a3/loki-distributor/0.log"
Feb 27 19:33:26 crc kubenswrapper[4708]: I0227 19:33:26.308024 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-fn48d_191b9cdf-6626-4c04-bc5e-c8585af9940d/gateway/0.log"
Feb 27 19:33:26 crc kubenswrapper[4708]: I0227 19:33:26.499013 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-gateway-7f8685b49f-hbxzw_1f8805bc-c67e-435a-8734-6a8e4f845e9f/gateway/0.log"
Feb 27 19:33:27 crc kubenswrapper[4708]: I0227 19:33:27.110385 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-ingester-0_238aef54-b0dd-495b-a5f8-66cc43b12088/loki-ingester/0.log"
Feb 27 19:33:27 crc kubenswrapper[4708]: I0227 19:33:27.121418 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-index-gateway-0_c56ea2d3-2905-47bd-b819-41705a3b858f/loki-index-gateway/0.log"
Feb 27 19:33:27 crc kubenswrapper[4708]: I0227 19:33:27.666629 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-query-frontend-67bb4dfcd8-9ms26_0d4a0e43-6399-4a19-97a2-6ecfa156222c/loki-query-frontend/0.log"
Feb 27 19:33:28 crc kubenswrapper[4708]: I0227 19:33:28.023108 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-lokistack-querier-58c84b5844-wb4dk_0b7415cb-a36a-4035-bcfa-1454faaa3e95/loki-querier/0.log"
Feb 27 19:33:28 crc kubenswrapper[4708]: I0227 19:33:28.027988 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-mdgdd_d57eeb05-c84e-45a1-8e3a-5c54cd498d30/configure-network-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 27 19:33:28 crc kubenswrapper[4708]: I0227 19:33:28.357428 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-8zf22_41b80060-486e-4ab2-872a-cfbbdf39b405/configure-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 27 19:33:28 crc kubenswrapper[4708]: I0227 19:33:28.496625 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-csc9f_c1565c10-ac46-4e06-aaef-7eafc155b4cd/init/0.log"
Feb 27 19:33:28 crc kubenswrapper[4708]: I0227 19:33:28.750582 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-csc9f_c1565c10-ac46-4e06-aaef-7eafc155b4cd/init/0.log"
Feb 27 19:33:28 crc kubenswrapper[4708]: I0227 19:33:28.782788 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_download-cache-edpm-deployment-openstack-edpm-ipam-hclxw_378dc842-8c5d-4882-ab1f-3f89e1ed250b/download-cache-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 27 19:33:28 crc kubenswrapper[4708]: I0227 19:33:28.805444 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-85f64749dc-csc9f_c1565c10-ac46-4e06-aaef-7eafc155b4cd/dnsmasq-dns/0.log"
Feb 27 19:33:29 crc kubenswrapper[4708]: I0227 19:33:29.218454 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1/glance-httpd/0.log"
Feb 27 19:33:29 crc kubenswrapper[4708]: I0227 19:33:29.232095 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_a15bcd78-5c20-4ab0-ab64-e7b7e65cf4d1/glance-log/0.log"
Feb 27 19:33:29 crc kubenswrapper[4708]: I0227 19:33:29.471318 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1d218377-bee6-44e0-a6f7-ef62a33366e0/glance-log/0.log"
Feb 27 19:33:29 crc kubenswrapper[4708]: I0227 19:33:29.483211 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_1d218377-bee6-44e0-a6f7-ef62a33366e0/glance-httpd/0.log"
Feb 27 19:33:29 crc kubenswrapper[4708]: I0227 19:33:29.569940 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-wxf22_e6eb203f-b8bd-4a02-8c47-ed0d1490b341/install-certs-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 27 19:33:29 crc kubenswrapper[4708]: I0227 19:33:29.842490 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-w625j_4fdb3925-ad04-4a50-82e2-2f2362945df4/install-os-edpm-deployment-openstack-edpm-ipam/0.log"
Feb 27 19:33:30 crc kubenswrapper[4708]: I0227 19:33:30.061237 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29536921-w8pnk_d3b26d5e-d907-420b-b4be-bdb12fd169e7/keystone-cron/0.log"
path="/var/log/pods/openstack_keystone-cron-29536921-w8pnk_d3b26d5e-d907-420b-b4be-bdb12fd169e7/keystone-cron/0.log" Feb 27 19:33:30 crc kubenswrapper[4708]: I0227 19:33:30.347119 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-597b655d8b-dmxbr_8d31c043-7a1b-4030-aa89-ccf8a23a766b/keystone-api/0.log" Feb 27 19:33:30 crc kubenswrapper[4708]: I0227 19:33:30.426510 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29536981-c7wzx_478e58b4-a3ac-4474-88b8-5d289430de52/keystone-cron/0.log" Feb 27 19:33:30 crc kubenswrapper[4708]: I0227 19:33:30.862162 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_e8569f7c-7242-437f-80b5-0146d75c19c5/kube-state-metrics/0.log" Feb 27 19:33:30 crc kubenswrapper[4708]: I0227 19:33:30.942659 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-gqvds_8d8413dc-ed60-4d4e-a1ea-92d3f46de85f/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 19:33:31 crc kubenswrapper[4708]: I0227 19:33:31.375630 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-547f9bd6cc-98rqm_b9aa13d2-83ae-4a00-821d-97fc5592ec7e/neutron-api/0.log" Feb 27 19:33:31 crc kubenswrapper[4708]: I0227 19:33:31.602830 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-547f9bd6cc-98rqm_b9aa13d2-83ae-4a00-821d-97fc5592ec7e/neutron-httpd/0.log" Feb 27 19:33:31 crc kubenswrapper[4708]: I0227 19:33:31.672621 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-m4stn_7fc04a87-78e4-4c7e-b1d1-2127e4b9fffc/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 19:33:32 crc kubenswrapper[4708]: I0227 19:33:32.137110 4708 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-6cffdcc987-z48fb" podUID="6e9387a8-c996-4095-8d52-d73b5d6d1d7e" containerName="proxy-httpd" probeResult="failure" output="HTTP probe failed with statuscode: 502" Feb 27 19:33:32 crc kubenswrapper[4708]: I0227 19:33:32.363406 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1b557ce2-14db-4777-927b-045eccbac5e5/nova-api-log/0.log" Feb 27 19:33:32 crc kubenswrapper[4708]: I0227 19:33:32.535216 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_d573ab41-daa8-4853-9698-d55d5e7664df/nova-cell0-conductor-conductor/0.log" Feb 27 19:33:33 crc kubenswrapper[4708]: I0227 19:33:33.024991 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_b4a0dd32-9089-4ca1-8814-b78372b68724/nova-cell1-conductor-conductor/0.log" Feb 27 19:33:33 crc kubenswrapper[4708]: I0227 19:33:33.511355 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_f6aea8fe-6682-4d69-90d7-173b5d089d5f/nova-cell1-novncproxy-novncproxy/0.log" Feb 27 19:33:33 crc kubenswrapper[4708]: I0227 19:33:33.629098 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-edpm-deployment-openstack-edpm-ipam-w575s_991979f1-f211-41ce-b112-fa555006dfec/nova-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 19:33:33 crc kubenswrapper[4708]: I0227 19:33:33.633701 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_1b557ce2-14db-4777-927b-045eccbac5e5/nova-api-api/0.log" Feb 27 19:33:34 crc kubenswrapper[4708]: I0227 19:33:34.312156 4708 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d441abe7-688c-4023-b44a-badbf0e2365b/nova-metadata-log/0.log" Feb 27 19:33:34 crc kubenswrapper[4708]: I0227 19:33:34.725665 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_d1d2c8fb-e050-4235-b072-367cb5dd24d6/nova-scheduler-scheduler/0.log" Feb 27 19:33:34 crc kubenswrapper[4708]: I0227 19:33:34.983740 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4c3332de-a21c-4552-a037-c5665b4c0927/mysql-bootstrap/0.log" Feb 27 19:33:35 crc kubenswrapper[4708]: I0227 19:33:35.233101 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:33:35 crc kubenswrapper[4708]: E0227 19:33:35.233328 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:33:35 crc kubenswrapper[4708]: I0227 19:33:35.328552 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4c3332de-a21c-4552-a037-c5665b4c0927/mysql-bootstrap/0.log" Feb 27 19:33:35 crc kubenswrapper[4708]: I0227 19:33:35.360119 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_4c3332de-a21c-4552-a037-c5665b4c0927/galera/0.log" Feb 27 19:33:35 crc kubenswrapper[4708]: I0227 19:33:35.606893 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6f6f6892-d9d6-4f71-bc65-8e47c15bddc1/mysql-bootstrap/0.log" Feb 27 19:33:35 crc kubenswrapper[4708]: I0227 19:33:35.852698 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6f6f6892-d9d6-4f71-bc65-8e47c15bddc1/mysql-bootstrap/0.log" Feb 27 19:33:35 crc kubenswrapper[4708]: I0227 19:33:35.854758 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_6f6f6892-d9d6-4f71-bc65-8e47c15bddc1/galera/0.log" Feb 27 19:33:36 crc kubenswrapper[4708]: I0227 19:33:36.051214 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_12701673-c2a0-4e8a-b906-b7e61a49c224/openstackclient/0.log" Feb 27 19:33:36 crc kubenswrapper[4708]: I0227 19:33:36.385242 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-6zlsq_2410b28c-0b9c-4da0-826a-bcbbab63a292/ovn-controller/0.log" Feb 27 19:33:36 crc kubenswrapper[4708]: I0227 19:33:36.589556 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-b7fvz_7ca9fb3f-72d8-44f0-a4c5-ef78ff437d0d/openstack-network-exporter/0.log" Feb 27 19:33:37 crc kubenswrapper[4708]: I0227 19:33:37.120424 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k2qzb_cdfec2dc-369d-405a-a7c4-95c4b5a08d8a/ovsdb-server-init/0.log" Feb 27 19:33:37 crc kubenswrapper[4708]: I0227 19:33:37.333017 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k2qzb_cdfec2dc-369d-405a-a7c4-95c4b5a08d8a/ovsdb-server-init/0.log" Feb 27 19:33:37 crc kubenswrapper[4708]: I0227 19:33:37.346924 4708 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openstack_ovn-controller-ovs-k2qzb_cdfec2dc-369d-405a-a7c4-95c4b5a08d8a/ovs-vswitchd/0.log" Feb 27 19:33:37 crc kubenswrapper[4708]: I0227 19:33:37.637011 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-k2qzb_cdfec2dc-369d-405a-a7c4-95c4b5a08d8a/ovsdb-server/0.log" Feb 27 19:33:37 crc kubenswrapper[4708]: I0227 19:33:37.888667 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-jzmv2_e8a95a5c-facb-48fb-85e3-6f440a9e84b2/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 19:33:38 crc kubenswrapper[4708]: I0227 19:33:38.070494 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d3d398d5-587b-48e8-b90b-a3e511311982/openstack-network-exporter/0.log" Feb 27 19:33:38 crc kubenswrapper[4708]: I0227 19:33:38.156137 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_d3d398d5-587b-48e8-b90b-a3e511311982/ovn-northd/0.log" Feb 27 19:33:38 crc kubenswrapper[4708]: I0227 19:33:38.371657 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e2314c35-5338-4db2-a705-53cbc737f9a1/openstack-network-exporter/0.log" Feb 27 19:33:38 crc kubenswrapper[4708]: I0227 19:33:38.678118 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_e2314c35-5338-4db2-a705-53cbc737f9a1/ovsdbserver-nb/0.log" Feb 27 19:33:38 crc kubenswrapper[4708]: I0227 19:33:38.871814 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9f2235b7-8f1a-4510-8ca8-ed784bf1aec1/openstack-network-exporter/0.log" Feb 27 19:33:38 crc kubenswrapper[4708]: I0227 19:33:38.939488 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_9f2235b7-8f1a-4510-8ca8-ed784bf1aec1/ovsdbserver-sb/0.log" Feb 27 19:33:39 crc kubenswrapper[4708]: I0227 19:33:39.455931 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-54c5f87dbb-t77v4_b872f276-2f96-401e-b918-f031b919338a/placement-api/0.log" Feb 27 19:33:39 crc kubenswrapper[4708]: I0227 19:33:39.599819 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-54c5f87dbb-t77v4_b872f276-2f96-401e-b918-f031b919338a/placement-log/0.log" Feb 27 19:33:39 crc kubenswrapper[4708]: I0227 19:33:39.811080 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_d441abe7-688c-4023-b44a-badbf0e2365b/nova-metadata-metadata/0.log" Feb 27 19:33:39 crc kubenswrapper[4708]: I0227 19:33:39.922721 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5107d3e0-ea93-4d89-b36c-f726b481e0e0/init-config-reloader/0.log" Feb 27 19:33:40 crc kubenswrapper[4708]: I0227 19:33:40.140967 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5107d3e0-ea93-4d89-b36c-f726b481e0e0/init-config-reloader/0.log" Feb 27 19:33:40 crc kubenswrapper[4708]: I0227 19:33:40.153372 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5107d3e0-ea93-4d89-b36c-f726b481e0e0/config-reloader/0.log" Feb 27 19:33:40 crc kubenswrapper[4708]: I0227 19:33:40.178318 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cloudkitty-proc-0_caa871a6-96e7-4f11-8769-0fc2464b8f71/cloudkitty-proc/0.log" Feb 27 19:33:40 crc kubenswrapper[4708]: I0227 19:33:40.195904 4708 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5107d3e0-ea93-4d89-b36c-f726b481e0e0/prometheus/0.log" Feb 27 19:33:40 crc kubenswrapper[4708]: I0227 19:33:40.566697 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7ac4a3d3-0b3a-4fc5-8f98-806ca5810475/setup-container/0.log" Feb 27 19:33:40 crc kubenswrapper[4708]: I0227 19:33:40.644183 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_prometheus-metric-storage-0_5107d3e0-ea93-4d89-b36c-f726b481e0e0/thanos-sidecar/0.log" Feb 27 19:33:40 crc kubenswrapper[4708]: I0227 19:33:40.920127 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7ac4a3d3-0b3a-4fc5-8f98-806ca5810475/setup-container/0.log" Feb 27 19:33:40 crc kubenswrapper[4708]: I0227 19:33:40.932627 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_7ac4a3d3-0b3a-4fc5-8f98-806ca5810475/rabbitmq/0.log" Feb 27 19:33:41 crc kubenswrapper[4708]: I0227 19:33:41.005916 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_866e4edf-2f8a-4c4b-9caf-54ad03011231/setup-container/0.log" Feb 27 19:33:41 crc kubenswrapper[4708]: I0227 19:33:41.241040 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_866e4edf-2f8a-4c4b-9caf-54ad03011231/setup-container/0.log" Feb 27 19:33:41 crc kubenswrapper[4708]: I0227 19:33:41.300292 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_866e4edf-2f8a-4c4b-9caf-54ad03011231/rabbitmq/0.log" Feb 27 19:33:41 crc kubenswrapper[4708]: I0227 19:33:41.382211 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-n9lkm_2b2d8b39-89e2-4743-910d-c5471b6a327c/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 19:33:41 crc kubenswrapper[4708]: I0227 19:33:41.508284 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_redhat-edpm-deployment-openstack-edpm-ipam-xpxdz_366fdafa-6776-4ab6-82b3-be300efc15de/redhat-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 19:33:41 crc kubenswrapper[4708]: I0227 19:33:41.642078 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-hcchj_7bde186b-7de3-419b-b5fe-58d72f7d1a9e/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 19:33:41 crc kubenswrapper[4708]: I0227 19:33:41.771785 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-tpt6m_4f284073-5b25-4831-86e7-6b9165c34d73/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 19:33:41 crc kubenswrapper[4708]: I0227 19:33:41.952353 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-mddsz_8885c7ac-9dbc-4dba-89c1-ea98a342af30/ssh-known-hosts-edpm-deployment/0.log" Feb 27 19:33:42 crc kubenswrapper[4708]: I0227 19:33:42.200131 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6cffdcc987-z48fb_6e9387a8-c996-4095-8d52-d73b5d6d1d7e/proxy-server/0.log" Feb 27 19:33:42 crc kubenswrapper[4708]: I0227 19:33:42.435884 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-ring-rebalance-wq4dg_487e829b-b6b1-4c03-8c90-f35a10aee7a2/swift-ring-rebalance/0.log" Feb 27 19:33:42 crc kubenswrapper[4708]: I0227 19:33:42.467379 4708 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_swift-proxy-6cffdcc987-z48fb_6e9387a8-c996-4095-8d52-d73b5d6d1d7e/proxy-httpd/0.log" Feb 27 19:33:42 crc kubenswrapper[4708]: I0227 19:33:42.511481 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/account-auditor/0.log" Feb 27 19:33:42 crc kubenswrapper[4708]: I0227 19:33:42.680812 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/account-reaper/0.log" Feb 27 19:33:42 crc kubenswrapper[4708]: I0227 19:33:42.797738 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/account-server/0.log" Feb 27 19:33:42 crc kubenswrapper[4708]: I0227 19:33:42.812176 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/account-replicator/0.log" Feb 27 19:33:42 crc kubenswrapper[4708]: I0227 19:33:42.876840 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/container-auditor/0.log" Feb 27 19:33:42 crc kubenswrapper[4708]: I0227 19:33:42.982186 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/container-replicator/0.log" Feb 27 19:33:43 crc kubenswrapper[4708]: I0227 19:33:43.067897 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/container-server/0.log" Feb 27 19:33:43 crc kubenswrapper[4708]: I0227 19:33:43.108227 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/object-auditor/0.log" Feb 27 19:33:43 crc kubenswrapper[4708]: I0227 19:33:43.118230 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/container-updater/0.log" Feb 27 19:33:43 crc kubenswrapper[4708]: I0227 19:33:43.348201 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/object-server/0.log" Feb 27 19:33:43 crc kubenswrapper[4708]: I0227 19:33:43.360089 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/object-expirer/0.log" Feb 27 19:33:43 crc kubenswrapper[4708]: I0227 19:33:43.774228 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/object-updater/0.log" Feb 27 19:33:43 crc kubenswrapper[4708]: I0227 19:33:43.807045 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/object-replicator/0.log" Feb 27 19:33:44 crc kubenswrapper[4708]: I0227 19:33:44.023155 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/rsync/0.log" Feb 27 19:33:44 crc kubenswrapper[4708]: I0227 19:33:44.110068 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_swift-storage-0_e8a41f59-1fee-425c-a42a-de40caa66c0f/swift-recon-cron/0.log" Feb 27 19:33:44 crc kubenswrapper[4708]: I0227 19:33:44.243435 4708 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_telemetry-edpm-deployment-openstack-edpm-ipam-4dl7v_f46db3bc-f11b-4634-9916-10c0094d3d5f/telemetry-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 19:33:44 crc kubenswrapper[4708]: I0227 19:33:44.395171 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_a2707740-9be6-47c5-996c-43c292ad9758/tempest-tests-tempest-tests-runner/0.log" Feb 27 19:33:44 crc kubenswrapper[4708]: I0227 19:33:44.465335 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_40ab3d43-846c-4b15-93e9-3b63e179fa73/test-operator-logs-container/0.log" Feb 27 19:33:44 crc kubenswrapper[4708]: I0227 19:33:44.635806 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-lbw99_3c1995e2-730c-4f54-a505-cd3794371a7a/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Feb 27 19:33:49 crc kubenswrapper[4708]: I0227 19:33:49.228341 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:33:49 crc kubenswrapper[4708]: E0227 19:33:49.229147 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:33:56 crc kubenswrapper[4708]: I0227 19:33:56.636700 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_0c436943-14ee-474c-a393-c067fd0dec97/memcached/0.log" Feb 27 19:34:00 crc kubenswrapper[4708]: I0227 19:34:00.161844 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537014-9285m"] Feb 27 19:34:00 crc kubenswrapper[4708]: E0227 19:34:00.162628 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0424d8a-5c20-43d6-8c94-9dc095c888c0" containerName="container-00" Feb 27 19:34:00 crc kubenswrapper[4708]: I0227 19:34:00.162641 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0424d8a-5c20-43d6-8c94-9dc095c888c0" containerName="container-00" Feb 27 19:34:00 crc kubenswrapper[4708]: I0227 19:34:00.162841 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0424d8a-5c20-43d6-8c94-9dc095c888c0" containerName="container-00" Feb 27 19:34:00 crc kubenswrapper[4708]: I0227 19:34:00.163593 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537014-9285m" Feb 27 19:34:00 crc kubenswrapper[4708]: I0227 19:34:00.175869 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537014-9285m"] Feb 27 19:34:00 crc kubenswrapper[4708]: I0227 19:34:00.260154 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8rd4\" (UniqueName: \"kubernetes.io/projected/2b0ad072-9864-40a7-abdb-c3ac8b7255a0-kube-api-access-n8rd4\") pod \"auto-csr-approver-29537014-9285m\" (UID: \"2b0ad072-9864-40a7-abdb-c3ac8b7255a0\") " pod="openshift-infra/auto-csr-approver-29537014-9285m" Feb 27 19:34:00 crc kubenswrapper[4708]: I0227 19:34:00.363058 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8rd4\" (UniqueName: \"kubernetes.io/projected/2b0ad072-9864-40a7-abdb-c3ac8b7255a0-kube-api-access-n8rd4\") pod \"auto-csr-approver-29537014-9285m\" (UID: \"2b0ad072-9864-40a7-abdb-c3ac8b7255a0\") " pod="openshift-infra/auto-csr-approver-29537014-9285m" Feb 27 19:34:00 crc kubenswrapper[4708]: I0227 19:34:00.385561 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8rd4\" (UniqueName: \"kubernetes.io/projected/2b0ad072-9864-40a7-abdb-c3ac8b7255a0-kube-api-access-n8rd4\") pod \"auto-csr-approver-29537014-9285m\" (UID: \"2b0ad072-9864-40a7-abdb-c3ac8b7255a0\") " pod="openshift-infra/auto-csr-approver-29537014-9285m" Feb 27 19:34:00 crc kubenswrapper[4708]: I0227 19:34:00.507201 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537014-9285m" Feb 27 19:34:01 crc kubenswrapper[4708]: I0227 19:34:01.040998 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537014-9285m"] Feb 27 19:34:01 crc kubenswrapper[4708]: I0227 19:34:01.963700 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537014-9285m" event={"ID":"2b0ad072-9864-40a7-abdb-c3ac8b7255a0","Type":"ContainerStarted","Data":"054690196a0dd59f14a518a5e5e04ab5e9c4565bf99c69c92fd557941b5ed557"} Feb 27 19:34:02 crc kubenswrapper[4708]: I0227 19:34:02.235789 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:34:02 crc kubenswrapper[4708]: E0227 19:34:02.236342 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:34:02 crc kubenswrapper[4708]: I0227 19:34:02.977138 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537014-9285m" event={"ID":"2b0ad072-9864-40a7-abdb-c3ac8b7255a0","Type":"ContainerStarted","Data":"fdf975746d4bf490a18cf765b2c64405efc37ae8c57746113d99ca2ddf623d9f"} Feb 27 19:34:02 crc kubenswrapper[4708]: I0227 19:34:02.994600 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29537014-9285m" podStartSLOduration=2.141870798 podStartE2EDuration="2.994581677s" podCreationTimestamp="2026-02-27 19:34:00 +0000 UTC" firstStartedPulling="2026-02-27 
19:34:01.047023108 +0000 UTC m=+9639.562820695" lastFinishedPulling="2026-02-27 19:34:01.899733987 +0000 UTC m=+9640.415531574" observedRunningTime="2026-02-27 19:34:02.992276332 +0000 UTC m=+9641.508073929" watchObservedRunningTime="2026-02-27 19:34:02.994581677 +0000 UTC m=+9641.510379264" Feb 27 19:34:03 crc kubenswrapper[4708]: I0227 19:34:03.989641 4708 generic.go:334] "Generic (PLEG): container finished" podID="2b0ad072-9864-40a7-abdb-c3ac8b7255a0" containerID="fdf975746d4bf490a18cf765b2c64405efc37ae8c57746113d99ca2ddf623d9f" exitCode=0 Feb 27 19:34:03 crc kubenswrapper[4708]: I0227 19:34:03.989741 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537014-9285m" event={"ID":"2b0ad072-9864-40a7-abdb-c3ac8b7255a0","Type":"ContainerDied","Data":"fdf975746d4bf490a18cf765b2c64405efc37ae8c57746113d99ca2ddf623d9f"} Feb 27 19:34:05 crc kubenswrapper[4708]: I0227 19:34:05.646641 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537014-9285m" Feb 27 19:34:05 crc kubenswrapper[4708]: I0227 19:34:05.815876 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8rd4\" (UniqueName: \"kubernetes.io/projected/2b0ad072-9864-40a7-abdb-c3ac8b7255a0-kube-api-access-n8rd4\") pod \"2b0ad072-9864-40a7-abdb-c3ac8b7255a0\" (UID: \"2b0ad072-9864-40a7-abdb-c3ac8b7255a0\") " Feb 27 19:34:05 crc kubenswrapper[4708]: I0227 19:34:05.824113 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b0ad072-9864-40a7-abdb-c3ac8b7255a0-kube-api-access-n8rd4" (OuterVolumeSpecName: "kube-api-access-n8rd4") pod "2b0ad072-9864-40a7-abdb-c3ac8b7255a0" (UID: "2b0ad072-9864-40a7-abdb-c3ac8b7255a0"). InnerVolumeSpecName "kube-api-access-n8rd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:34:05 crc kubenswrapper[4708]: I0227 19:34:05.918599 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8rd4\" (UniqueName: \"kubernetes.io/projected/2b0ad072-9864-40a7-abdb-c3ac8b7255a0-kube-api-access-n8rd4\") on node \"crc\" DevicePath \"\"" Feb 27 19:34:06 crc kubenswrapper[4708]: I0227 19:34:06.012035 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537014-9285m" event={"ID":"2b0ad072-9864-40a7-abdb-c3ac8b7255a0","Type":"ContainerDied","Data":"054690196a0dd59f14a518a5e5e04ab5e9c4565bf99c69c92fd557941b5ed557"} Feb 27 19:34:06 crc kubenswrapper[4708]: I0227 19:34:06.012085 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="054690196a0dd59f14a518a5e5e04ab5e9c4565bf99c69c92fd557941b5ed557" Feb 27 19:34:06 crc kubenswrapper[4708]: I0227 19:34:06.012152 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537014-9285m" Feb 27 19:34:06 crc kubenswrapper[4708]: I0227 19:34:06.718917 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537006-wsz82"] Feb 27 19:34:06 crc kubenswrapper[4708]: I0227 19:34:06.731976 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537006-wsz82"] Feb 27 19:34:08 crc kubenswrapper[4708]: I0227 19:34:08.247013 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03b7dcf3-b9de-4111-bfdf-c872d8f34b03" path="/var/lib/kubelet/pods/03b7dcf3-b9de-4111-bfdf-c872d8f34b03/volumes" Feb 27 19:34:13 crc kubenswrapper[4708]: I0227 19:34:13.228828 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:34:13 crc kubenswrapper[4708]: E0227 19:34:13.229592 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:34:19 crc kubenswrapper[4708]: I0227 19:34:19.364932 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g_18f2082e-b7e1-4045-9853-b790e42cbe82/util/0.log" Feb 27 19:34:19 crc kubenswrapper[4708]: I0227 19:34:19.632715 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g_18f2082e-b7e1-4045-9853-b790e42cbe82/util/0.log" Feb 27 19:34:19 crc kubenswrapper[4708]: I0227 19:34:19.666145 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g_18f2082e-b7e1-4045-9853-b790e42cbe82/pull/0.log" Feb 27 19:34:19 crc kubenswrapper[4708]: I0227 19:34:19.683383 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g_18f2082e-b7e1-4045-9853-b790e42cbe82/pull/0.log" Feb 27 19:34:19 crc kubenswrapper[4708]: I0227 19:34:19.872666 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g_18f2082e-b7e1-4045-9853-b790e42cbe82/pull/0.log" Feb 27 19:34:19 crc kubenswrapper[4708]: I0227 19:34:19.889707 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g_18f2082e-b7e1-4045-9853-b790e42cbe82/extract/0.log" Feb 27 19:34:19 crc kubenswrapper[4708]: I0227 19:34:19.914016 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_b6ee5ccf91bed128b849bf9222b256b2cc51d1ae4139e53f135eaeabdc2jh2g_18f2082e-b7e1-4045-9853-b790e42cbe82/util/0.log" Feb 27 19:34:20 crc kubenswrapper[4708]: I0227 19:34:20.436996 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-5d87c9d997-wffwh_038010da-affb-4db1-88e9-67e8ee1304cc/manager/0.log" Feb 27 19:34:20 crc kubenswrapper[4708]: I0227 19:34:20.864089 4708 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_glance-operator-controller-manager-64db6967f8-5nrb9_5ea0106c-7f8b-493f-847f-da8b5ee33395/manager/0.log" Feb 27 19:34:20 crc kubenswrapper[4708]: I0227 19:34:20.972176 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6db6876945-2hw5z_3fd10334-e172-4f8f-8f20-9d447937468f/manager/0.log" Feb 27 19:34:21 crc kubenswrapper[4708]: I0227 19:34:21.091540 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-cf99c678f-c4pj6_b2819715-8c70-4b6f-8199-8e122f5b03e4/manager/0.log" Feb 27 19:34:21 crc kubenswrapper[4708]: I0227 19:34:21.320751 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-78bc7f9bd9-wv64j_9ff0a3b0-a6e8-4f03-bbca-b04e516cfaff/manager/0.log" Feb 27 19:34:21 crc kubenswrapper[4708]: I0227 19:34:21.734400 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-545456dc4-wp777_f3ca9720-d51d-4c81-9aa0-3c21947be164/manager/0.log" Feb 27 19:34:21 crc kubenswrapper[4708]: I0227 19:34:21.834744 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-f7fcc58b9-sxmk5_dde28522-3138-4c50-b3c5-1e26d61b96e1/manager/0.log" Feb 27 19:34:22 crc kubenswrapper[4708]: I0227 19:34:22.058572 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-55ffd4876b-n66z2_f2e64742-9a09-4f5a-b8d5-ec938e7ac27b/manager/0.log" Feb 27 19:34:22 crc kubenswrapper[4708]: I0227 19:34:22.395584 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-67d996989d-kj8hq_1ade7297-180b-4c42-85b7-5edaf33dd0b4/manager/0.log" Feb 27 19:34:22 crc kubenswrapper[4708]: I0227 19:34:22.765707 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-556b8b874-mcvwl_df5608da-0dbc-4335-b221-feb484afd410/manager/0.log" Feb 27 19:34:22 crc kubenswrapper[4708]: I0227 19:34:22.834759 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-55d77d7b5c-4kwbb_45efdeea-5e44-44b0-b9d0-e2cc8c441168/manager/0.log" Feb 27 19:34:22 crc kubenswrapper[4708]: I0227 19:34:22.929563 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-54688575f-d29bm_f52bc8c9-30b0-4f44-8f5c-f2af4c7176d5/manager/0.log" Feb 27 19:34:23 crc kubenswrapper[4708]: I0227 19:34:23.096665 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-74b6b5dc96-vcjxj_156803c8-e795-452c-9244-b93c2b3af9e7/manager/0.log" Feb 27 19:34:23 crc kubenswrapper[4708]: I0227 19:34:23.187486 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-5d86c7ddb7-dqxzg_bf787ac7-afe7-4705-a740-80d2f0d60054/manager/0.log" Feb 27 19:34:23 crc kubenswrapper[4708]: I0227 19:34:23.308522 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-7c6767dc9c4nbq2_c0bf6b0d-d70d-4498-a61f-cd7354439357/manager/0.log" Feb 27 19:34:23 crc kubenswrapper[4708]: I0227 19:34:23.496867 4708 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_openstack-operator-controller-init-7fb98c5bdd-sptst_c23c26b6-9e2d-46bf-9b7b-7e942361e3bc/operator/0.log" Feb 27 19:34:23 crc kubenswrapper[4708]: I0227 19:34:23.895336 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-zk98t_779d3205-e15f-4447-9e95-256243b04cf3/registry-server/0.log" Feb 27 19:34:24 crc kubenswrapper[4708]: I0227 19:34:24.180454 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-75684d597f-rj2g4_5cb187f0-85c4-48ef-90fb-6a6c896188e5/manager/0.log" Feb 27 19:34:24 crc kubenswrapper[4708]: I0227 19:34:24.229994 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:34:24 crc kubenswrapper[4708]: E0227 19:34:24.230442 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:34:24 crc kubenswrapper[4708]: I0227 19:34:24.264357 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-648564c9fc-vq95w_03b225c1-aa9b-4f83-b786-1c9c299ef456/manager/0.log" Feb 27 19:34:24 crc kubenswrapper[4708]: I0227 19:34:24.311320 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-wzdmr_025b2ef1-3f2f-413f-a6a0-c5d34cd27447/operator/0.log" Feb 27 19:34:24 crc kubenswrapper[4708]: I0227 19:34:24.503999 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-9b9ff9f4d-8jdst_ae129c1e-ae9f-4cef-93fd-b186bf0eb275/manager/0.log" Feb 27 19:34:24 crc kubenswrapper[4708]: I0227 19:34:24.689943 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-b89df8bf4-c7qtl_8e7ab31e-da8a-4ae8-a4c1-940312416cc3/manager/0.log" Feb 27 19:34:24 crc kubenswrapper[4708]: I0227 19:34:24.870493 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-55b5ff4dbb-jfh6m_9cf6d78e-38dd-4875-8fcc-6b34b93c9924/manager/0.log" Feb 27 19:34:24 crc kubenswrapper[4708]: I0227 19:34:24.991353 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-5c646dc97-69twh_037ffc6c-63a3-4848-9b83-e68944940401/manager/0.log" Feb 27 19:34:25 crc kubenswrapper[4708]: I0227 19:34:25.018672 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-bccc79885-sjbv4_7a28ceb0-14d8-4fa0-a7ca-3921efcaba86/manager/0.log" Feb 27 19:34:32 crc kubenswrapper[4708]: I0227 19:34:32.972872 4708 scope.go:117] "RemoveContainer" containerID="638ac760b601ba05990116642e22b573ebf37e07030fd57447bcd772ece69c08" Feb 27 19:34:39 crc kubenswrapper[4708]: I0227 19:34:39.228547 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:34:39 crc kubenswrapper[4708]: E0227 19:34:39.229216 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:34:47 crc kubenswrapper[4708]: I0227 19:34:47.308883 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-jn4h4_c050b374-23f2-4a98-af19-fee47a82a879/control-plane-machine-set-operator/0.log" Feb 27 19:34:47 crc kubenswrapper[4708]: I0227 19:34:47.464230 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-s45vs_26d12a6e-d830-4357-b372-9163d663448f/kube-rbac-proxy/0.log" Feb 27 19:34:47 crc kubenswrapper[4708]: I0227 19:34:47.585290 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-s45vs_26d12a6e-d830-4357-b372-9163d663448f/machine-api-operator/0.log" Feb 27 19:34:49 crc kubenswrapper[4708]: E0227 19:34:49.232146 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:34:49 crc kubenswrapper[4708]: E0227 19:34:49.232801 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:34:49 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:34:49 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rhr6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537012-99k6z_openshift-infra(440d9f6e-2360-49dc-bf60-0a544c990079): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:34:49 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 19:34:49 crc kubenswrapper[4708]: E0227 19:34:49.233962 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:34:51 crc kubenswrapper[4708]: I0227 19:34:51.228706 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:34:51 crc kubenswrapper[4708]: E0227 19:34:51.229027 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:35:01 crc kubenswrapper[4708]: E0227 19:35:01.232372 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:35:01 crc kubenswrapper[4708]: I0227 19:35:01.634783 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858654f9db-8qwnr_29594fd1-8f6b-4b90-aad4-0ef65bb098b3/cert-manager-controller/0.log" Feb 27 19:35:01 crc kubenswrapper[4708]: I0227 19:35:01.771911 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-cf98fcc89-glgjb_628c406f-d7a1-471d-a1b5-56413469baf9/cert-manager-cainjector/0.log" Feb 27 19:35:01 crc kubenswrapper[4708]: I0227 19:35:01.988296 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-687f57d79b-vcm49_aa734fc8-e63b-4877-bc29-774dfdbc8768/cert-manager-webhook/0.log" Feb 27 19:35:06 crc kubenswrapper[4708]: I0227 19:35:06.229148 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:35:06 crc kubenswrapper[4708]: I0227 19:35:06.630498 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"081b1c80d435493dc455ba22a5f8780a28b8b1cb9921b9014300ff6f29437e96"} Feb 27 19:35:13 crc kubenswrapper[4708]: E0227 19:35:13.982703 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:35:13 crc kubenswrapper[4708]: E0227 19:35:13.983349 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:35:13 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:35:13 crc kubenswrapper[4708]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rhr6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537012-99k6z_openshift-infra(440d9f6e-2360-49dc-bf60-0a544c990079): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:35:13 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 19:35:13 crc kubenswrapper[4708]: E0227 19:35:13.984551 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:35:15 crc kubenswrapper[4708]: I0227 19:35:15.572470 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-5dcbbd79cf-vhqgc_d0f73455-2b50-4d77-8943-a75587af8b9d/nmstate-console-plugin/0.log" Feb 27 19:35:15 crc kubenswrapper[4708]: I0227 19:35:15.791031 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-q6bvp_13879980-37e2-49a9-a9ba-056ba7fb5698/nmstate-handler/0.log" Feb 27 19:35:15 crc kubenswrapper[4708]: I0227 19:35:15.825637 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-mjd67_fdbd97e5-232b-4c09-b936-7258fc72a153/kube-rbac-proxy/0.log" Feb 27 19:35:15 crc kubenswrapper[4708]: I0227 19:35:15.987599 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-69594cc75-mjd67_fdbd97e5-232b-4c09-b936-7258fc72a153/nmstate-metrics/0.log" Feb 27 19:35:16 crc kubenswrapper[4708]: I0227 19:35:16.048516 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-75c5dccd6c-2mqdb_2b0a02a3-1871-4bf5-a292-e2bb406be9b1/nmstate-operator/0.log" Feb 27 19:35:16 crc kubenswrapper[4708]: I0227 19:35:16.222525 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-786f45cff4-4mk88_6c61d3bb-a5e6-4206-a47a-9d6fcba04da4/nmstate-webhook/0.log" Feb 27 19:35:25 crc kubenswrapper[4708]: E0227 19:35:25.230952 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:35:31 crc 
kubenswrapper[4708]: I0227 19:35:31.152782 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5545944799-2z66r_64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37/kube-rbac-proxy/0.log" Feb 27 19:35:31 crc kubenswrapper[4708]: I0227 19:35:31.163493 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5545944799-2z66r_64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37/manager/0.log" Feb 27 19:35:38 crc kubenswrapper[4708]: E0227 19:35:38.239542 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:35:46 crc kubenswrapper[4708]: I0227 19:35:46.706373 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-mnthm_33badcb1-2622-423f-afe6-482b92342910/prometheus-operator/0.log" Feb 27 19:35:46 crc kubenswrapper[4708]: I0227 19:35:46.991752 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs_3b0277f7-658b-4897-b034-9aab6cacc59e/prometheus-operator-admission-webhook/0.log" Feb 27 19:35:47 crc kubenswrapper[4708]: I0227 19:35:47.077203 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx_e46fb234-1f2b-4217-b76b-0e2900d525da/prometheus-operator-admission-webhook/0.log" Feb 27 19:35:47 crc kubenswrapper[4708]: I0227 19:35:47.653478 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-x7wsw_f03daf2a-7ba1-454e-a2fd-dd2e12631679/operator/0.log" Feb 27 19:35:47 crc kubenswrapper[4708]: I0227 19:35:47.711845 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-2qs44_5c087cb8-7024-4186-9e20-5620cdb2fd9a/perses-operator/0.log" Feb 27 19:35:51 crc kubenswrapper[4708]: E0227 19:35:51.354476 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:36:00 crc kubenswrapper[4708]: I0227 19:36:00.181181 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537016-6mjlb"] Feb 27 19:36:00 crc kubenswrapper[4708]: E0227 19:36:00.182267 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b0ad072-9864-40a7-abdb-c3ac8b7255a0" containerName="oc" Feb 27 19:36:00 crc kubenswrapper[4708]: I0227 19:36:00.182286 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b0ad072-9864-40a7-abdb-c3ac8b7255a0" containerName="oc" Feb 27 19:36:00 crc kubenswrapper[4708]: I0227 19:36:00.182571 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b0ad072-9864-40a7-abdb-c3ac8b7255a0" containerName="oc" Feb 27 19:36:00 crc kubenswrapper[4708]: I0227 19:36:00.183525 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" Feb 27 19:36:00 crc kubenswrapper[4708]: I0227 19:36:00.198551 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537016-6mjlb"] Feb 27 19:36:00 crc kubenswrapper[4708]: I0227 19:36:00.231567 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74822\" (UniqueName: \"kubernetes.io/projected/d63c566d-7b0f-4580-9d8f-17077155f4f4-kube-api-access-74822\") pod \"auto-csr-approver-29537016-6mjlb\" (UID: \"d63c566d-7b0f-4580-9d8f-17077155f4f4\") " pod="openshift-infra/auto-csr-approver-29537016-6mjlb" Feb 27 19:36:00 crc kubenswrapper[4708]: I0227 19:36:00.333569 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74822\" (UniqueName: \"kubernetes.io/projected/d63c566d-7b0f-4580-9d8f-17077155f4f4-kube-api-access-74822\") pod \"auto-csr-approver-29537016-6mjlb\" (UID: \"d63c566d-7b0f-4580-9d8f-17077155f4f4\") " pod="openshift-infra/auto-csr-approver-29537016-6mjlb" Feb 27 19:36:00 crc kubenswrapper[4708]: I0227 19:36:00.366983 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74822\" (UniqueName: \"kubernetes.io/projected/d63c566d-7b0f-4580-9d8f-17077155f4f4-kube-api-access-74822\") pod \"auto-csr-approver-29537016-6mjlb\" (UID: \"d63c566d-7b0f-4580-9d8f-17077155f4f4\") " pod="openshift-infra/auto-csr-approver-29537016-6mjlb" Feb 27 19:36:00 crc kubenswrapper[4708]: I0227 19:36:00.515559 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" Feb 27 19:36:00 crc kubenswrapper[4708]: I0227 19:36:00.977657 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537016-6mjlb"] Feb 27 19:36:01 crc kubenswrapper[4708]: I0227 19:36:01.144834 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" event={"ID":"d63c566d-7b0f-4580-9d8f-17077155f4f4","Type":"ContainerStarted","Data":"601807b233fe37af803d6a1ac6b4b954ca23db805fb249439e3e8847046de885"} Feb 27 19:36:02 crc kubenswrapper[4708]: E0227 19:36:02.010046 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:36:02 crc kubenswrapper[4708]: E0227 19:36:02.010190 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:36:02 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:36:02 crc kubenswrapper[4708]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-74822,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537016-6mjlb_openshift-infra(d63c566d-7b0f-4580-9d8f-17077155f4f4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:36:02 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 19:36:02 crc kubenswrapper[4708]: E0227 19:36:02.011361 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" podUID="d63c566d-7b0f-4580-9d8f-17077155f4f4" Feb 27 19:36:02 crc kubenswrapper[4708]: E0227 19:36:02.155391 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" podUID="d63c566d-7b0f-4580-9d8f-17077155f4f4" Feb 27 19:36:04 crc kubenswrapper[4708]: E0227 19:36:04.782907 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:36:04 crc kubenswrapper[4708]: E0227 19:36:04.783573 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:36:04 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:36:04 crc kubenswrapper[4708]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rhr6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537012-99k6z_openshift-infra(440d9f6e-2360-49dc-bf60-0a544c990079): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:36:04 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 19:36:04 crc kubenswrapper[4708]: E0227 19:36:04.788268 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:36:05 crc kubenswrapper[4708]: I0227 19:36:05.341166 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-lpk95_79f40a52-2cad-44c5-8698-3738361bcafa/kube-rbac-proxy/0.log" Feb 27 19:36:05 crc kubenswrapper[4708]: I0227 19:36:05.580318 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/cp-frr-files/0.log" Feb 27 19:36:05 crc kubenswrapper[4708]: I0227 19:36:05.582183 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-86ddb6bd46-lpk95_79f40a52-2cad-44c5-8698-3738361bcafa/controller/0.log" Feb 27 19:36:05 crc kubenswrapper[4708]: I0227 19:36:05.807944 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/cp-frr-files/0.log" Feb 27 19:36:05 crc kubenswrapper[4708]: I0227 19:36:05.819734 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/cp-reloader/0.log" Feb 27 19:36:05 crc kubenswrapper[4708]: I0227 19:36:05.835629 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/cp-reloader/0.log" Feb 27 19:36:05 crc kubenswrapper[4708]: I0227 19:36:05.913135 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/cp-metrics/0.log" Feb 27 19:36:06 crc kubenswrapper[4708]: I0227 19:36:06.081123 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/cp-reloader/0.log" Feb 27 19:36:06 crc kubenswrapper[4708]: I0227 19:36:06.094111 4708 log.go:25] "Finished parsing 
log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/cp-frr-files/0.log" Feb 27 19:36:06 crc kubenswrapper[4708]: I0227 19:36:06.170606 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/cp-metrics/0.log" Feb 27 19:36:06 crc kubenswrapper[4708]: I0227 19:36:06.251207 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/cp-metrics/0.log" Feb 27 19:36:06 crc kubenswrapper[4708]: I0227 19:36:06.376432 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/cp-metrics/0.log" Feb 27 19:36:06 crc kubenswrapper[4708]: I0227 19:36:06.412612 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/cp-frr-files/0.log" Feb 27 19:36:06 crc kubenswrapper[4708]: I0227 19:36:06.440944 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/cp-reloader/0.log" Feb 27 19:36:06 crc kubenswrapper[4708]: I0227 19:36:06.474776 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/controller/0.log" Feb 27 19:36:06 crc kubenswrapper[4708]: I0227 19:36:06.648936 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/frr-metrics/0.log" Feb 27 19:36:06 crc kubenswrapper[4708]: I0227 19:36:06.649224 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/kube-rbac-proxy/0.log" Feb 27 19:36:06 crc kubenswrapper[4708]: I0227 19:36:06.685891 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/kube-rbac-proxy-frr/0.log" Feb 27 19:36:07 crc kubenswrapper[4708]: I0227 19:36:07.010352 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7f989f654f-cnmrz_6c250b4a-ba60-4846-a259-3a5f04f9142a/frr-k8s-webhook-server/0.log" Feb 27 19:36:07 crc kubenswrapper[4708]: I0227 19:36:07.013043 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/reloader/0.log" Feb 27 19:36:07 crc kubenswrapper[4708]: I0227 19:36:07.470258 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-7d79b99f67-pbln9_38a1edba-b7f3-4051-bb7f-9f7c5ecea249/manager/0.log" Feb 27 19:36:07 crc kubenswrapper[4708]: I0227 19:36:07.639367 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-79f55df48d-fgptj_c990cc98-0533-4790-8569-2c5b1f52f353/webhook-server/0.log" Feb 27 19:36:07 crc kubenswrapper[4708]: I0227 19:36:07.723368 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-26sk8_684605c3-e5a8-4755-953e-84a8a4ab3e2e/kube-rbac-proxy/0.log" Feb 27 19:36:08 crc kubenswrapper[4708]: I0227 19:36:08.600691 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-26sk8_684605c3-e5a8-4755-953e-84a8a4ab3e2e/speaker/0.log" Feb 27 19:36:09 crc kubenswrapper[4708]: I0227 19:36:09.482335 4708 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-k8mc9_ab09f69e-3ca1-4192-b224-59fd8ce9ad0c/frr/0.log" Feb 27 19:36:16 crc kubenswrapper[4708]: E0227 19:36:16.230869 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:36:23 crc kubenswrapper[4708]: I0227 19:36:23.744883 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9_9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63/util/0.log" Feb 27 19:36:24 crc kubenswrapper[4708]: I0227 19:36:24.076898 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9_9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63/util/0.log" Feb 27 19:36:24 crc kubenswrapper[4708]: I0227 19:36:24.080069 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9_9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63/pull/0.log" Feb 27 19:36:24 crc kubenswrapper[4708]: I0227 19:36:24.137530 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9_9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63/pull/0.log" Feb 27 19:36:24 crc kubenswrapper[4708]: I0227 19:36:24.341387 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9_9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63/pull/0.log" Feb 27 19:36:24 crc kubenswrapper[4708]: I0227 19:36:24.355013 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9_9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63/util/0.log" Feb 27 19:36:24 crc kubenswrapper[4708]: I0227 19:36:24.402877 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_0e94e7566f739476ccec6d16e58de3f1c434cfa3060893f90f3e473a82tb6b9_9dcc0b3b-90f6-4964-bf41-ef3f87ec6b63/extract/0.log" Feb 27 19:36:24 crc kubenswrapper[4708]: I0227 19:36:24.636347 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x_bba75f99-dc6c-4c6a-ae97-e636ed291513/util/0.log" Feb 27 19:36:24 crc kubenswrapper[4708]: I0227 19:36:24.790674 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x_bba75f99-dc6c-4c6a-ae97-e636ed291513/pull/0.log" Feb 27 19:36:24 crc kubenswrapper[4708]: I0227 19:36:24.800032 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x_bba75f99-dc6c-4c6a-ae97-e636ed291513/util/0.log" Feb 27 19:36:24 crc kubenswrapper[4708]: I0227 19:36:24.876464 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x_bba75f99-dc6c-4c6a-ae97-e636ed291513/pull/0.log" Feb 27 19:36:25 crc kubenswrapper[4708]: I0227 19:36:25.066467 4708 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x_bba75f99-dc6c-4c6a-ae97-e636ed291513/pull/0.log" Feb 27 19:36:25 crc kubenswrapper[4708]: I0227 19:36:25.122079 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x_bba75f99-dc6c-4c6a-ae97-e636ed291513/util/0.log" Feb 27 19:36:25 crc kubenswrapper[4708]: I0227 19:36:25.155018 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_7f58e735a36f9542d9a3af6ebc3f4824d644ecc313275701c496e86651bvz6x_bba75f99-dc6c-4c6a-ae97-e636ed291513/extract/0.log" Feb 27 19:36:25 crc kubenswrapper[4708]: I0227 19:36:25.281046 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p_9061e6c9-6752-4d0b-adbc-a10578e633fc/util/0.log" Feb 27 19:36:25 crc kubenswrapper[4708]: I0227 19:36:25.445953 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p_9061e6c9-6752-4d0b-adbc-a10578e633fc/util/0.log" Feb 27 19:36:25 crc kubenswrapper[4708]: I0227 19:36:25.484370 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p_9061e6c9-6752-4d0b-adbc-a10578e633fc/pull/0.log" Feb 27 19:36:25 crc kubenswrapper[4708]: I0227 19:36:25.489092 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p_9061e6c9-6752-4d0b-adbc-a10578e633fc/pull/0.log" Feb 27 19:36:25 crc kubenswrapper[4708]: I0227 19:36:25.922262 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p_9061e6c9-6752-4d0b-adbc-a10578e633fc/util/0.log" Feb 27 19:36:25 crc kubenswrapper[4708]: I0227 19:36:25.930773 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p_9061e6c9-6752-4d0b-adbc-a10578e633fc/extract/0.log" Feb 27 19:36:26 crc kubenswrapper[4708]: I0227 19:36:26.063368 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gx98p_9061e6c9-6752-4d0b-adbc-a10578e633fc/pull/0.log" Feb 27 19:36:26 crc kubenswrapper[4708]: I0227 19:36:26.150022 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq8mz_927d4daf-45f7-48a8-9e25-a47aae1be192/extract-utilities/0.log" Feb 27 19:36:26 crc kubenswrapper[4708]: I0227 19:36:26.360674 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq8mz_927d4daf-45f7-48a8-9e25-a47aae1be192/extract-content/0.log" Feb 27 19:36:26 crc kubenswrapper[4708]: I0227 19:36:26.381166 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq8mz_927d4daf-45f7-48a8-9e25-a47aae1be192/extract-content/0.log" Feb 27 19:36:26 crc kubenswrapper[4708]: I0227 19:36:26.439309 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq8mz_927d4daf-45f7-48a8-9e25-a47aae1be192/extract-utilities/0.log" Feb 27 19:36:26 crc kubenswrapper[4708]: I0227 19:36:26.612258 4708 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq8mz_927d4daf-45f7-48a8-9e25-a47aae1be192/extract-content/0.log" Feb 27 19:36:26 crc kubenswrapper[4708]: I0227 19:36:26.628510 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq8mz_927d4daf-45f7-48a8-9e25-a47aae1be192/extract-utilities/0.log" Feb 27 19:36:26 crc kubenswrapper[4708]: I0227 19:36:26.882754 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-22xt4_7917d39c-2ac3-45d4-817d-d0722e37c5a5/extract-utilities/0.log" Feb 27 19:36:27 crc kubenswrapper[4708]: I0227 19:36:27.208320 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-22xt4_7917d39c-2ac3-45d4-817d-d0722e37c5a5/extract-utilities/0.log" Feb 27 19:36:27 crc kubenswrapper[4708]: I0227 19:36:27.214073 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-22xt4_7917d39c-2ac3-45d4-817d-d0722e37c5a5/extract-content/0.log" Feb 27 19:36:27 crc kubenswrapper[4708]: E0227 19:36:27.230721 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:36:27 crc kubenswrapper[4708]: I0227 19:36:27.289650 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-22xt4_7917d39c-2ac3-45d4-817d-d0722e37c5a5/extract-content/0.log" Feb 27 19:36:27 crc kubenswrapper[4708]: I0227 19:36:27.520389 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-22xt4_7917d39c-2ac3-45d4-817d-d0722e37c5a5/extract-utilities/0.log" Feb 27 19:36:27 crc kubenswrapper[4708]: I0227 19:36:27.650266 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-22xt4_7917d39c-2ac3-45d4-817d-d0722e37c5a5/extract-content/0.log" Feb 27 19:36:27 crc kubenswrapper[4708]: I0227 19:36:27.830915 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-tq8mz_927d4daf-45f7-48a8-9e25-a47aae1be192/registry-server/0.log" Feb 27 19:36:27 crc kubenswrapper[4708]: I0227 19:36:27.943105 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh_950927f1-3a77-4b7d-bec6-c669d6c60496/util/0.log" Feb 27 19:36:28 crc kubenswrapper[4708]: I0227 19:36:28.215332 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh_950927f1-3a77-4b7d-bec6-c669d6c60496/util/0.log" Feb 27 19:36:28 crc kubenswrapper[4708]: I0227 19:36:28.240275 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh_950927f1-3a77-4b7d-bec6-c669d6c60496/pull/0.log" Feb 27 19:36:28 crc kubenswrapper[4708]: I0227 19:36:28.301571 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh_950927f1-3a77-4b7d-bec6-c669d6c60496/pull/0.log" Feb 27 19:36:28 crc kubenswrapper[4708]: I0227 19:36:28.550424 4708 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh_950927f1-3a77-4b7d-bec6-c669d6c60496/util/0.log" Feb 27 19:36:28 crc kubenswrapper[4708]: I0227 19:36:28.667602 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh_950927f1-3a77-4b7d-bec6-c669d6c60496/pull/0.log" Feb 27 19:36:28 crc kubenswrapper[4708]: I0227 19:36:28.667997 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_d146760600e43041070ad4572d9c23f31a62e3aefc01a54998863bc5f4t5xdh_950927f1-3a77-4b7d-bec6-c669d6c60496/extract/0.log" Feb 27 19:36:28 crc kubenswrapper[4708]: I0227 19:36:28.820394 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-22xt4_7917d39c-2ac3-45d4-817d-d0722e37c5a5/registry-server/0.log" Feb 27 19:36:28 crc kubenswrapper[4708]: I0227 19:36:28.851076 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-wvdjp_0004cd70-bc98-40ac-b46e-54e84ba076d5/marketplace-operator/0.log" Feb 27 19:36:28 crc kubenswrapper[4708]: I0227 19:36:28.888493 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jqppg_5688f78b-5e14-4ff7-83d1-681f44a1273e/extract-utilities/0.log" Feb 27 19:36:29 crc kubenswrapper[4708]: I0227 19:36:29.317043 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jqppg_5688f78b-5e14-4ff7-83d1-681f44a1273e/extract-utilities/0.log" Feb 27 19:36:29 crc kubenswrapper[4708]: I0227 19:36:29.322737 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jqppg_5688f78b-5e14-4ff7-83d1-681f44a1273e/extract-content/0.log" Feb 27 19:36:29 crc kubenswrapper[4708]: I0227 19:36:29.393994 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jqppg_5688f78b-5e14-4ff7-83d1-681f44a1273e/extract-content/0.log" Feb 27 19:36:29 crc kubenswrapper[4708]: I0227 19:36:29.520370 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jqppg_5688f78b-5e14-4ff7-83d1-681f44a1273e/extract-utilities/0.log" Feb 27 19:36:29 crc kubenswrapper[4708]: I0227 19:36:29.581357 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jqppg_5688f78b-5e14-4ff7-83d1-681f44a1273e/extract-content/0.log" Feb 27 19:36:29 crc kubenswrapper[4708]: I0227 19:36:29.709500 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vlld_1159c76c-e814-4a91-a99f-2b18b6758214/extract-utilities/0.log" Feb 27 19:36:29 crc kubenswrapper[4708]: I0227 19:36:29.874203 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-jqppg_5688f78b-5e14-4ff7-83d1-681f44a1273e/registry-server/0.log" Feb 27 19:36:30 crc kubenswrapper[4708]: I0227 19:36:30.086790 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vlld_1159c76c-e814-4a91-a99f-2b18b6758214/extract-utilities/0.log" Feb 27 19:36:30 crc kubenswrapper[4708]: I0227 19:36:30.137961 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vlld_1159c76c-e814-4a91-a99f-2b18b6758214/extract-content/0.log" Feb 27 19:36:30 crc 
Feb 27 19:36:30 crc kubenswrapper[4708]: I0227 19:36:30.152572 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vlld_1159c76c-e814-4a91-a99f-2b18b6758214/extract-content/0.log"
Feb 27 19:36:30 crc kubenswrapper[4708]: I0227 19:36:30.322517 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vlld_1159c76c-e814-4a91-a99f-2b18b6758214/extract-utilities/0.log"
Feb 27 19:36:30 crc kubenswrapper[4708]: I0227 19:36:30.375038 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vlld_1159c76c-e814-4a91-a99f-2b18b6758214/extract-content/0.log"
Feb 27 19:36:31 crc kubenswrapper[4708]: I0227 19:36:31.552463 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-4vlld_1159c76c-e814-4a91-a99f-2b18b6758214/registry-server/0.log"
Feb 27 19:36:41 crc kubenswrapper[4708]: E0227 19:36:41.231273 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079"
Feb 27 19:36:46 crc kubenswrapper[4708]: I0227 19:36:46.284573 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5b6f5b6bf-hgnhs_3b0277f7-658b-4897-b034-9aab6cacc59e/prometheus-operator-admission-webhook/0.log"
Feb 27 19:36:46 crc kubenswrapper[4708]: I0227 19:36:46.331879 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-mnthm_33badcb1-2622-423f-afe6-482b92342910/prometheus-operator/0.log"
Feb 27 19:36:46 crc kubenswrapper[4708]: I0227 19:36:46.409619 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-5b6f5b6bf-q9wzx_e46fb234-1f2b-4217-b76b-0e2900d525da/prometheus-operator-admission-webhook/0.log"
Feb 27 19:36:46 crc kubenswrapper[4708]: I0227 19:36:46.628712 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-x7wsw_f03daf2a-7ba1-454e-a2fd-dd2e12631679/operator/0.log"
Feb 27 19:36:46 crc kubenswrapper[4708]: I0227 19:36:46.631092 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-2qs44_5c087cb8-7024-4186-9e20-5620cdb2fd9a/perses-operator/0.log"
Feb 27 19:36:55 crc kubenswrapper[4708]: E0227 19:36:55.231139 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079"
Feb 27 19:37:02 crc kubenswrapper[4708]: I0227 19:37:02.612402 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5545944799-2z66r_64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37/kube-rbac-proxy/0.log"
Feb 27 19:37:02 crc kubenswrapper[4708]: I0227 19:37:02.628932 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators-redhat_loki-operator-controller-manager-5545944799-2z66r_64f3ade7-08ee-4d5d-a5b0-f77cb24d8d37/manager/0.log"
Feb 27 19:37:09 crc kubenswrapper[4708]: E0227 19:37:09.230319 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079"
Feb 27 19:37:10 crc kubenswrapper[4708]: E0227 19:37:10.422159 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest"
Feb 27 19:37:10 crc kubenswrapper[4708]: E0227 19:37:10.422505 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Feb 27 19:37:10 crc kubenswrapper[4708]: 	container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve
Feb 27 19:37:10 crc kubenswrapper[4708]: 	],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-74822,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537016-6mjlb_openshift-infra(d63c566d-7b0f-4580-9d8f-17077155f4f4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)
Feb 27 19:37:10 crc kubenswrapper[4708]: 	> logger="UnhandledError"
Feb 27 19:37:10 crc kubenswrapper[4708]: E0227 19:37:10.423670 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" podUID="d63c566d-7b0f-4580-9d8f-17077155f4f4"
Feb 27 19:37:23 crc kubenswrapper[4708]: E0227 19:37:23.231018 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079"
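Each failed pull surfaces as a three-entry chain: the CRI PullImage RPC error (log.go:32), a kuberuntime_manager "Unhandled Error" carrying the full &Container{...} spec dump, and a pod_workers "Error syncing pod" summary. The root cause in all of them is the same URL, the sigstore signature endpoint for ose-cli answering HTTP 500. A sketch (illustrative regex over the text above, not a kubelet API) for extracting the image, the failing signature URL, and the status code:

```python
import re
import sys

# Illustrative pattern matching the CRI error text as it appears in this journal.
PULL_FAIL = re.compile(
    r'"PullImage from image service failed" err="rpc error: [^"]*?'
    r'reading signature from (https://\S+?): status (\d{3})'
    r'[^"]*" image="([^"]+)"'
)

def pull_failures(journal_text: str):
    """Yield (image, signature_url, http_status) per failed CRI PullImage call."""
    for url, status, image in PULL_FAIL.findall(journal_text):
        yield image, url, int(status)

if __name__ == "__main__":
    for image, url, status in pull_failures(sys.stdin.read()):
        print(status, image, url)
```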
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" podUID="d63c566d-7b0f-4580-9d8f-17077155f4f4" Feb 27 19:37:33 crc kubenswrapper[4708]: I0227 19:37:33.122731 4708 scope.go:117] "RemoveContainer" containerID="0b0038c5c9e9cd04cdf6be268bc527dd307d6e5cbb2ae43f65cdf0ec7a71a222" Feb 27 19:37:35 crc kubenswrapper[4708]: I0227 19:37:35.632214 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:37:35 crc kubenswrapper[4708]: I0227 19:37:35.632546 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:37:37 crc kubenswrapper[4708]: I0227 19:37:37.230984 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:37:39 crc kubenswrapper[4708]: E0227 19:37:39.282564 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:37:39 crc kubenswrapper[4708]: E0227 19:37:39.283175 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:37:39 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:37:39 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-74822,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537016-6mjlb_openshift-infra(d63c566d-7b0f-4580-9d8f-17077155f4f4): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:37:39 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 19:37:39 crc kubenswrapper[4708]: E0227 19:37:39.284782 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system 
image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" podUID="d63c566d-7b0f-4580-9d8f-17077155f4f4" Feb 27 19:37:39 crc kubenswrapper[4708]: E0227 19:37:39.563746 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:37:39 crc kubenswrapper[4708]: E0227 19:37:39.563913 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:37:39 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:37:39 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-rhr6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537012-99k6z_openshift-infra(440d9f6e-2360-49dc-bf60-0a544c990079): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:37:39 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 19:37:39 crc kubenswrapper[4708]: E0227 19:37:39.565102 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:37:52 crc kubenswrapper[4708]: E0227 19:37:52.238784 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" podUID="d63c566d-7b0f-4580-9d8f-17077155f4f4" Feb 27 19:37:53 crc kubenswrapper[4708]: E0227 19:37:53.230440 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
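The ImagePullBackOff repeats for pod auto-csr-approver-29537012-99k6z are worth separating from the real pull retries: the "Back-off pulling image" lines recur at 19:36:16, 19:36:27, 19:36:41, 19:36:55, 19:37:09 and 19:37:23, i.e. every 11-14 s as the pod worker resyncs and re-reports the back-off, while fresh ErrImagePull attempts (19:37:10, 19:37:39, ...) are spaced further apart by the growing back-off. Checking the cadence from the timestamps above:

```python
from datetime import datetime

# "Back-off pulling image" sightings for auto-csr-approver-29537012-99k6z, from the log above.
times = ["19:36:16", "19:36:27", "19:36:41", "19:36:55", "19:37:09", "19:37:23"]
stamps = [datetime.strptime(t, "%H:%M:%S") for t in times]
gaps = [(b - a).total_seconds() for a, b in zip(stamps, stamps[1:])]
print(gaps)  # [11.0, 14.0, 14.0, 14.0, 14.0] -- worker resync cadence, not the pull back-off itself
```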
\"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:38:00 crc kubenswrapper[4708]: I0227 19:38:00.151590 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537018-kvs2v"] Feb 27 19:38:00 crc kubenswrapper[4708]: I0227 19:38:00.153532 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537018-kvs2v" Feb 27 19:38:00 crc kubenswrapper[4708]: I0227 19:38:00.165473 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537018-kvs2v"] Feb 27 19:38:00 crc kubenswrapper[4708]: I0227 19:38:00.336743 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmlkl\" (UniqueName: \"kubernetes.io/projected/a823cd78-f1a2-4fe0-9883-4155276d4872-kube-api-access-mmlkl\") pod \"auto-csr-approver-29537018-kvs2v\" (UID: \"a823cd78-f1a2-4fe0-9883-4155276d4872\") " pod="openshift-infra/auto-csr-approver-29537018-kvs2v" Feb 27 19:38:00 crc kubenswrapper[4708]: I0227 19:38:00.438670 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmlkl\" (UniqueName: \"kubernetes.io/projected/a823cd78-f1a2-4fe0-9883-4155276d4872-kube-api-access-mmlkl\") pod \"auto-csr-approver-29537018-kvs2v\" (UID: \"a823cd78-f1a2-4fe0-9883-4155276d4872\") " pod="openshift-infra/auto-csr-approver-29537018-kvs2v" Feb 27 19:38:00 crc kubenswrapper[4708]: I0227 19:38:00.475743 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmlkl\" (UniqueName: \"kubernetes.io/projected/a823cd78-f1a2-4fe0-9883-4155276d4872-kube-api-access-mmlkl\") pod \"auto-csr-approver-29537018-kvs2v\" (UID: \"a823cd78-f1a2-4fe0-9883-4155276d4872\") " pod="openshift-infra/auto-csr-approver-29537018-kvs2v" Feb 27 19:38:00 crc kubenswrapper[4708]: I0227 19:38:00.772453 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537018-kvs2v" Feb 27 19:38:01 crc kubenswrapper[4708]: I0227 19:38:01.214900 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537018-kvs2v"] Feb 27 19:38:01 crc kubenswrapper[4708]: I0227 19:38:01.669446 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537018-kvs2v" event={"ID":"a823cd78-f1a2-4fe0-9883-4155276d4872","Type":"ContainerStarted","Data":"bd1d2b80da3c9b98062ced000cd7e70d0568a28635ee83cd79f6bd04db659ffa"} Feb 27 19:38:02 crc kubenswrapper[4708]: E0227 19:38:02.156280 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:38:02 crc kubenswrapper[4708]: E0227 19:38:02.156725 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:38:02 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:38:02 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-mmlkl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537018-kvs2v_openshift-infra(a823cd78-f1a2-4fe0-9883-4155276d4872): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:38:02 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 19:38:02 crc kubenswrapper[4708]: E0227 19:38:02.157981 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537018-kvs2v" podUID="a823cd78-f1a2-4fe0-9883-4155276d4872" Feb 27 19:38:02 crc kubenswrapper[4708]: E0227 19:38:02.680935 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" 
pod="openshift-infra/auto-csr-approver-29537018-kvs2v" podUID="a823cd78-f1a2-4fe0-9883-4155276d4872" Feb 27 19:38:04 crc kubenswrapper[4708]: E0227 19:38:04.230759 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" podUID="d63c566d-7b0f-4580-9d8f-17077155f4f4" Feb 27 19:38:05 crc kubenswrapper[4708]: I0227 19:38:05.631883 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:38:05 crc kubenswrapper[4708]: I0227 19:38:05.632205 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:38:06 crc kubenswrapper[4708]: E0227 19:38:06.231301 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:38:17 crc kubenswrapper[4708]: E0227 19:38:17.231102 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:38:17 crc kubenswrapper[4708]: E0227 19:38:17.231106 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" podUID="d63c566d-7b0f-4580-9d8f-17077155f4f4" Feb 27 19:38:20 crc kubenswrapper[4708]: I0227 19:38:20.848926 4708 generic.go:334] "Generic (PLEG): container finished" podID="a823cd78-f1a2-4fe0-9883-4155276d4872" containerID="11f4c1da99af9e144a4ea2fc26eff8f6250171eebf810ea74db0ff30c3a868e5" exitCode=0 Feb 27 19:38:20 crc kubenswrapper[4708]: I0227 19:38:20.849004 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537018-kvs2v" event={"ID":"a823cd78-f1a2-4fe0-9883-4155276d4872","Type":"ContainerDied","Data":"11f4c1da99af9e144a4ea2fc26eff8f6250171eebf810ea74db0ff30c3a868e5"} Feb 27 19:38:22 crc kubenswrapper[4708]: I0227 19:38:22.467475 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537018-kvs2v" Feb 27 19:38:22 crc kubenswrapper[4708]: I0227 19:38:22.642752 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmlkl\" (UniqueName: \"kubernetes.io/projected/a823cd78-f1a2-4fe0-9883-4155276d4872-kube-api-access-mmlkl\") pod \"a823cd78-f1a2-4fe0-9883-4155276d4872\" (UID: \"a823cd78-f1a2-4fe0-9883-4155276d4872\") " Feb 27 19:38:22 crc kubenswrapper[4708]: I0227 19:38:22.675893 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a823cd78-f1a2-4fe0-9883-4155276d4872-kube-api-access-mmlkl" (OuterVolumeSpecName: "kube-api-access-mmlkl") pod "a823cd78-f1a2-4fe0-9883-4155276d4872" (UID: "a823cd78-f1a2-4fe0-9883-4155276d4872"). InnerVolumeSpecName "kube-api-access-mmlkl". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:38:22 crc kubenswrapper[4708]: I0227 19:38:22.745244 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mmlkl\" (UniqueName: \"kubernetes.io/projected/a823cd78-f1a2-4fe0-9883-4155276d4872-kube-api-access-mmlkl\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:22 crc kubenswrapper[4708]: I0227 19:38:22.872149 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537018-kvs2v" event={"ID":"a823cd78-f1a2-4fe0-9883-4155276d4872","Type":"ContainerDied","Data":"bd1d2b80da3c9b98062ced000cd7e70d0568a28635ee83cd79f6bd04db659ffa"} Feb 27 19:38:22 crc kubenswrapper[4708]: I0227 19:38:22.872193 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd1d2b80da3c9b98062ced000cd7e70d0568a28635ee83cd79f6bd04db659ffa" Feb 27 19:38:22 crc kubenswrapper[4708]: I0227 19:38:22.872253 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537018-kvs2v" Feb 27 19:38:23 crc kubenswrapper[4708]: I0227 19:38:23.564751 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537008-n2666"] Feb 27 19:38:23 crc kubenswrapper[4708]: I0227 19:38:23.577762 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537008-n2666"] Feb 27 19:38:24 crc kubenswrapper[4708]: I0227 19:38:24.245440 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="340b6bcc-ec48-476e-b06c-40b190ee17d3" path="/var/lib/kubelet/pods/340b6bcc-ec48-476e-b06c-40b190ee17d3/volumes" Feb 27 19:38:29 crc kubenswrapper[4708]: E0227 19:38:29.230799 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.009402 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-p5xs5"] Feb 27 19:38:33 crc kubenswrapper[4708]: E0227 19:38:33.010603 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a823cd78-f1a2-4fe0-9883-4155276d4872" containerName="oc" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.010622 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="a823cd78-f1a2-4fe0-9883-4155276d4872" containerName="oc" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.010938 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="a823cd78-f1a2-4fe0-9883-4155276d4872" containerName="oc" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.012552 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.023388 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5xs5"] Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.071388 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7359d222-3d02-451f-a890-5870c88eb737-catalog-content\") pod \"redhat-marketplace-p5xs5\" (UID: \"7359d222-3d02-451f-a890-5870c88eb737\") " pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.071477 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dxcgx\" (UniqueName: \"kubernetes.io/projected/7359d222-3d02-451f-a890-5870c88eb737-kube-api-access-dxcgx\") pod \"redhat-marketplace-p5xs5\" (UID: \"7359d222-3d02-451f-a890-5870c88eb737\") " pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.071539 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7359d222-3d02-451f-a890-5870c88eb737-utilities\") pod \"redhat-marketplace-p5xs5\" (UID: \"7359d222-3d02-451f-a890-5870c88eb737\") " pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.173700 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7359d222-3d02-451f-a890-5870c88eb737-catalog-content\") pod \"redhat-marketplace-p5xs5\" (UID: \"7359d222-3d02-451f-a890-5870c88eb737\") " pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.173766 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dxcgx\" (UniqueName: \"kubernetes.io/projected/7359d222-3d02-451f-a890-5870c88eb737-kube-api-access-dxcgx\") pod \"redhat-marketplace-p5xs5\" (UID: \"7359d222-3d02-451f-a890-5870c88eb737\") " pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.173799 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7359d222-3d02-451f-a890-5870c88eb737-utilities\") pod \"redhat-marketplace-p5xs5\" (UID: \"7359d222-3d02-451f-a890-5870c88eb737\") " pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.174386 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7359d222-3d02-451f-a890-5870c88eb737-catalog-content\") pod \"redhat-marketplace-p5xs5\" (UID: \"7359d222-3d02-451f-a890-5870c88eb737\") " pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.174425 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7359d222-3d02-451f-a890-5870c88eb737-utilities\") pod \"redhat-marketplace-p5xs5\" (UID: \"7359d222-3d02-451f-a890-5870c88eb737\") " pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.187224 4708 scope.go:117] "RemoveContainer" 
containerID="b74b9d4ad77d53aac94d94a38cfa9dd7f938ed696437359ef610353a07825b04" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.205664 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dxcgx\" (UniqueName: \"kubernetes.io/projected/7359d222-3d02-451f-a890-5870c88eb737-kube-api-access-dxcgx\") pod \"redhat-marketplace-p5xs5\" (UID: \"7359d222-3d02-451f-a890-5870c88eb737\") " pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.338066 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:33 crc kubenswrapper[4708]: I0227 19:38:33.907112 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5xs5"] Feb 27 19:38:34 crc kubenswrapper[4708]: I0227 19:38:34.011073 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5xs5" event={"ID":"7359d222-3d02-451f-a890-5870c88eb737","Type":"ContainerStarted","Data":"a2a4051a769dc0a8cfe61e3962cb164ab1dc413218340d9e813e47b2ddef326f"} Feb 27 19:38:35 crc kubenswrapper[4708]: I0227 19:38:35.021294 4708 generic.go:334] "Generic (PLEG): container finished" podID="7359d222-3d02-451f-a890-5870c88eb737" containerID="d2d92c88ca54444413240a4d1d77a623044bfe29533825d8a7906e36cab1d52f" exitCode=0 Feb 27 19:38:35 crc kubenswrapper[4708]: I0227 19:38:35.022699 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5xs5" event={"ID":"7359d222-3d02-451f-a890-5870c88eb737","Type":"ContainerDied","Data":"d2d92c88ca54444413240a4d1d77a623044bfe29533825d8a7906e36cab1d52f"} Feb 27 19:38:35 crc kubenswrapper[4708]: I0227 19:38:35.030405 4708 generic.go:334] "Generic (PLEG): container finished" podID="d63c566d-7b0f-4580-9d8f-17077155f4f4" containerID="71c4f2c89d2628760875d6856154df6cc110dba663568265649e1cf5f377cfc6" exitCode=0 Feb 27 19:38:35 crc kubenswrapper[4708]: I0227 19:38:35.030494 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" event={"ID":"d63c566d-7b0f-4580-9d8f-17077155f4f4","Type":"ContainerDied","Data":"71c4f2c89d2628760875d6856154df6cc110dba663568265649e1cf5f377cfc6"} Feb 27 19:38:35 crc kubenswrapper[4708]: I0227 19:38:35.631698 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:38:35 crc kubenswrapper[4708]: I0227 19:38:35.632038 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:38:35 crc kubenswrapper[4708]: I0227 19:38:35.632074 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 19:38:35 crc kubenswrapper[4708]: I0227 19:38:35.632710 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"081b1c80d435493dc455ba22a5f8780a28b8b1cb9921b9014300ff6f29437e96"} 
pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:38:35 crc kubenswrapper[4708]: I0227 19:38:35.632759 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://081b1c80d435493dc455ba22a5f8780a28b8b1cb9921b9014300ff6f29437e96" gracePeriod=600 Feb 27 19:38:36 crc kubenswrapper[4708]: I0227 19:38:36.061301 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="081b1c80d435493dc455ba22a5f8780a28b8b1cb9921b9014300ff6f29437e96" exitCode=0 Feb 27 19:38:36 crc kubenswrapper[4708]: I0227 19:38:36.061369 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"081b1c80d435493dc455ba22a5f8780a28b8b1cb9921b9014300ff6f29437e96"} Feb 27 19:38:36 crc kubenswrapper[4708]: I0227 19:38:36.061700 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerStarted","Data":"6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96"} Feb 27 19:38:36 crc kubenswrapper[4708]: I0227 19:38:36.061726 4708 scope.go:117] "RemoveContainer" containerID="cc0eef8ce4cf9c53d2d235022f3d516fc4274b5cc931cd48eecc5f610d63d9c4" Feb 27 19:38:36 crc kubenswrapper[4708]: I0227 19:38:36.753616 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" Feb 27 19:38:36 crc kubenswrapper[4708]: I0227 19:38:36.853610 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74822\" (UniqueName: \"kubernetes.io/projected/d63c566d-7b0f-4580-9d8f-17077155f4f4-kube-api-access-74822\") pod \"d63c566d-7b0f-4580-9d8f-17077155f4f4\" (UID: \"d63c566d-7b0f-4580-9d8f-17077155f4f4\") " Feb 27 19:38:36 crc kubenswrapper[4708]: I0227 19:38:36.861080 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d63c566d-7b0f-4580-9d8f-17077155f4f4-kube-api-access-74822" (OuterVolumeSpecName: "kube-api-access-74822") pod "d63c566d-7b0f-4580-9d8f-17077155f4f4" (UID: "d63c566d-7b0f-4580-9d8f-17077155f4f4"). InnerVolumeSpecName "kube-api-access-74822". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:38:36 crc kubenswrapper[4708]: I0227 19:38:36.956290 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74822\" (UniqueName: \"kubernetes.io/projected/d63c566d-7b0f-4580-9d8f-17077155f4f4-kube-api-access-74822\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:37 crc kubenswrapper[4708]: I0227 19:38:37.073509 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" Feb 27 19:38:37 crc kubenswrapper[4708]: I0227 19:38:37.073868 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537016-6mjlb" event={"ID":"d63c566d-7b0f-4580-9d8f-17077155f4f4","Type":"ContainerDied","Data":"601807b233fe37af803d6a1ac6b4b954ca23db805fb249439e3e8847046de885"} Feb 27 19:38:37 crc kubenswrapper[4708]: I0227 19:38:37.073910 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="601807b233fe37af803d6a1ac6b4b954ca23db805fb249439e3e8847046de885" Feb 27 19:38:37 crc kubenswrapper[4708]: I0227 19:38:37.076063 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5xs5" event={"ID":"7359d222-3d02-451f-a890-5870c88eb737","Type":"ContainerStarted","Data":"cfec18590189ae0a2f75a120200af1a14faa56729e070674b64786534f930dd9"} Feb 27 19:38:37 crc kubenswrapper[4708]: I0227 19:38:37.825221 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537010-rkbwk"] Feb 27 19:38:37 crc kubenswrapper[4708]: I0227 19:38:37.848016 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537010-rkbwk"] Feb 27 19:38:38 crc kubenswrapper[4708]: I0227 19:38:38.241355 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b" path="/var/lib/kubelet/pods/d2b05b3e-5563-4b29-8c1a-e604f0bf9b3b/volumes" Feb 27 19:38:40 crc kubenswrapper[4708]: I0227 19:38:40.109496 4708 generic.go:334] "Generic (PLEG): container finished" podID="7359d222-3d02-451f-a890-5870c88eb737" containerID="cfec18590189ae0a2f75a120200af1a14faa56729e070674b64786534f930dd9" exitCode=0 Feb 27 19:38:40 crc kubenswrapper[4708]: I0227 19:38:40.109570 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5xs5" event={"ID":"7359d222-3d02-451f-a890-5870c88eb737","Type":"ContainerDied","Data":"cfec18590189ae0a2f75a120200af1a14faa56729e070674b64786534f930dd9"} Feb 27 19:38:41 crc kubenswrapper[4708]: I0227 19:38:41.124062 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5xs5" event={"ID":"7359d222-3d02-451f-a890-5870c88eb737","Type":"ContainerStarted","Data":"16b5db559f1b29a388933d844ba396d1d0d54b22291dbc6169c92e37f5f19e19"} Feb 27 19:38:42 crc kubenswrapper[4708]: I0227 19:38:42.154987 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-p5xs5" podStartSLOduration=4.348565063 podStartE2EDuration="10.154968709s" podCreationTimestamp="2026-02-27 19:38:32 +0000 UTC" firstStartedPulling="2026-02-27 19:38:35.024557473 +0000 UTC m=+9913.540355060" lastFinishedPulling="2026-02-27 19:38:40.830961119 +0000 UTC m=+9919.346758706" observedRunningTime="2026-02-27 19:38:42.150126072 +0000 UTC m=+9920.665923659" watchObservedRunningTime="2026-02-27 19:38:42.154968709 +0000 UTC m=+9920.670766296" Feb 27 19:38:43 crc kubenswrapper[4708]: I0227 19:38:43.338707 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:43 crc kubenswrapper[4708]: I0227 19:38:43.339316 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:45 crc kubenswrapper[4708]: I0227 19:38:45.210796 4708 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-marketplace/redhat-marketplace-p5xs5" podUID="7359d222-3d02-451f-a890-5870c88eb737" containerName="registry-server" probeResult="failure" output=< Feb 27 19:38:45 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 19:38:45 crc kubenswrapper[4708]: > Feb 27 19:38:45 crc kubenswrapper[4708]: E0227 19:38:45.275790 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:38:53 crc kubenswrapper[4708]: I0227 19:38:53.394437 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:53 crc kubenswrapper[4708]: I0227 19:38:53.451256 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:53 crc kubenswrapper[4708]: I0227 19:38:53.635776 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5xs5"] Feb 27 19:38:55 crc kubenswrapper[4708]: I0227 19:38:55.392974 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-p5xs5" podUID="7359d222-3d02-451f-a890-5870c88eb737" containerName="registry-server" containerID="cri-o://16b5db559f1b29a388933d844ba396d1d0d54b22291dbc6169c92e37f5f19e19" gracePeriod=2 Feb 27 19:38:56 crc kubenswrapper[4708]: I0227 19:38:56.404335 4708 generic.go:334] "Generic (PLEG): container finished" podID="7359d222-3d02-451f-a890-5870c88eb737" containerID="16b5db559f1b29a388933d844ba396d1d0d54b22291dbc6169c92e37f5f19e19" exitCode=0 Feb 27 19:38:56 crc kubenswrapper[4708]: I0227 19:38:56.404406 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5xs5" event={"ID":"7359d222-3d02-451f-a890-5870c88eb737","Type":"ContainerDied","Data":"16b5db559f1b29a388933d844ba396d1d0d54b22291dbc6169c92e37f5f19e19"} Feb 27 19:38:56 crc kubenswrapper[4708]: I0227 19:38:56.404637 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-p5xs5" event={"ID":"7359d222-3d02-451f-a890-5870c88eb737","Type":"ContainerDied","Data":"a2a4051a769dc0a8cfe61e3962cb164ab1dc413218340d9e813e47b2ddef326f"} Feb 27 19:38:56 crc kubenswrapper[4708]: I0227 19:38:56.404652 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2a4051a769dc0a8cfe61e3962cb164ab1dc413218340d9e813e47b2ddef326f" Feb 27 19:38:56 crc kubenswrapper[4708]: I0227 19:38:56.433568 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:56 crc kubenswrapper[4708]: I0227 19:38:56.559308 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7359d222-3d02-451f-a890-5870c88eb737-utilities\") pod \"7359d222-3d02-451f-a890-5870c88eb737\" (UID: \"7359d222-3d02-451f-a890-5870c88eb737\") " Feb 27 19:38:56 crc kubenswrapper[4708]: I0227 19:38:56.559921 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxcgx\" (UniqueName: \"kubernetes.io/projected/7359d222-3d02-451f-a890-5870c88eb737-kube-api-access-dxcgx\") pod \"7359d222-3d02-451f-a890-5870c88eb737\" (UID: \"7359d222-3d02-451f-a890-5870c88eb737\") " Feb 27 19:38:56 crc kubenswrapper[4708]: I0227 19:38:56.560045 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7359d222-3d02-451f-a890-5870c88eb737-catalog-content\") pod \"7359d222-3d02-451f-a890-5870c88eb737\" (UID: \"7359d222-3d02-451f-a890-5870c88eb737\") " Feb 27 19:38:56 crc kubenswrapper[4708]: I0227 19:38:56.560446 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7359d222-3d02-451f-a890-5870c88eb737-utilities" (OuterVolumeSpecName: "utilities") pod "7359d222-3d02-451f-a890-5870c88eb737" (UID: "7359d222-3d02-451f-a890-5870c88eb737"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:38:56 crc kubenswrapper[4708]: I0227 19:38:56.561303 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7359d222-3d02-451f-a890-5870c88eb737-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:56 crc kubenswrapper[4708]: I0227 19:38:56.567567 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7359d222-3d02-451f-a890-5870c88eb737-kube-api-access-dxcgx" (OuterVolumeSpecName: "kube-api-access-dxcgx") pod "7359d222-3d02-451f-a890-5870c88eb737" (UID: "7359d222-3d02-451f-a890-5870c88eb737"). InnerVolumeSpecName "kube-api-access-dxcgx". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:38:56 crc kubenswrapper[4708]: I0227 19:38:56.597956 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7359d222-3d02-451f-a890-5870c88eb737-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7359d222-3d02-451f-a890-5870c88eb737" (UID: "7359d222-3d02-451f-a890-5870c88eb737"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:38:56 crc kubenswrapper[4708]: I0227 19:38:56.663031 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dxcgx\" (UniqueName: \"kubernetes.io/projected/7359d222-3d02-451f-a890-5870c88eb737-kube-api-access-dxcgx\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:56 crc kubenswrapper[4708]: I0227 19:38:56.663073 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7359d222-3d02-451f-a890-5870c88eb737-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:38:57 crc kubenswrapper[4708]: I0227 19:38:57.412697 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-p5xs5" Feb 27 19:38:57 crc kubenswrapper[4708]: I0227 19:38:57.448887 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5xs5"] Feb 27 19:38:57 crc kubenswrapper[4708]: I0227 19:38:57.461810 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-p5xs5"] Feb 27 19:38:58 crc kubenswrapper[4708]: I0227 19:38:58.242528 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7359d222-3d02-451f-a890-5870c88eb737" path="/var/lib/kubelet/pods/7359d222-3d02-451f-a890-5870c88eb737/volumes" Feb 27 19:38:59 crc kubenswrapper[4708]: E0227 19:38:59.231439 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.403324 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5c4xg"] Feb 27 19:39:09 crc kubenswrapper[4708]: E0227 19:39:09.405107 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d63c566d-7b0f-4580-9d8f-17077155f4f4" containerName="oc" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.405140 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="d63c566d-7b0f-4580-9d8f-17077155f4f4" containerName="oc" Feb 27 19:39:09 crc kubenswrapper[4708]: E0227 19:39:09.405202 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7359d222-3d02-451f-a890-5870c88eb737" containerName="registry-server" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.405220 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7359d222-3d02-451f-a890-5870c88eb737" containerName="registry-server" Feb 27 19:39:09 crc kubenswrapper[4708]: E0227 19:39:09.405257 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7359d222-3d02-451f-a890-5870c88eb737" containerName="extract-content" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.405275 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7359d222-3d02-451f-a890-5870c88eb737" containerName="extract-content" Feb 27 19:39:09 crc kubenswrapper[4708]: E0227 19:39:09.405303 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7359d222-3d02-451f-a890-5870c88eb737" containerName="extract-utilities" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.405320 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="7359d222-3d02-451f-a890-5870c88eb737" containerName="extract-utilities" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.405897 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="d63c566d-7b0f-4580-9d8f-17077155f4f4" containerName="oc" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.405941 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="7359d222-3d02-451f-a890-5870c88eb737" containerName="registry-server" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.418012 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.418306 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5c4xg"] Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.456461 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vjpv\" (UniqueName: \"kubernetes.io/projected/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-kube-api-access-9vjpv\") pod \"community-operators-5c4xg\" (UID: \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\") " pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.456571 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-catalog-content\") pod \"community-operators-5c4xg\" (UID: \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\") " pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.457026 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-utilities\") pod \"community-operators-5c4xg\" (UID: \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\") " pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.559058 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-utilities\") pod \"community-operators-5c4xg\" (UID: \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\") " pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.559178 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vjpv\" (UniqueName: \"kubernetes.io/projected/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-kube-api-access-9vjpv\") pod \"community-operators-5c4xg\" (UID: \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\") " pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.559247 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-catalog-content\") pod \"community-operators-5c4xg\" (UID: \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\") " pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.559594 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-utilities\") pod \"community-operators-5c4xg\" (UID: \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\") " pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.559820 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-catalog-content\") pod \"community-operators-5c4xg\" (UID: \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\") " pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.586897 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-9vjpv\" (UniqueName: \"kubernetes.io/projected/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-kube-api-access-9vjpv\") pod \"community-operators-5c4xg\" (UID: \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\") " pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:39:09 crc kubenswrapper[4708]: I0227 19:39:09.742519 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:39:10 crc kubenswrapper[4708]: I0227 19:39:10.314668 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5c4xg"] Feb 27 19:39:10 crc kubenswrapper[4708]: I0227 19:39:10.562478 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c4xg" event={"ID":"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00","Type":"ContainerStarted","Data":"9d24d8a48498e45a81b41cac2a02a6158d83e2e880e53d580d9ffceb795e0dd8"} Feb 27 19:39:11 crc kubenswrapper[4708]: I0227 19:39:11.576601 4708 generic.go:334] "Generic (PLEG): container finished" podID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" containerID="30248f83d02db76ae59c93ab7a87f7184a121b9430f1a2d5b23c921948bcfad6" exitCode=0 Feb 27 19:39:11 crc kubenswrapper[4708]: I0227 19:39:11.576705 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c4xg" event={"ID":"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00","Type":"ContainerDied","Data":"30248f83d02db76ae59c93ab7a87f7184a121b9430f1a2d5b23c921948bcfad6"} Feb 27 19:39:12 crc kubenswrapper[4708]: E0227 19:39:12.155507 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:39:12 crc kubenswrapper[4708]: E0227 19:39:12.156030 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vjpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5c4xg_openshift-marketplace(5a2c095d-6d24-4cf3-9e2e-feb5094b5f00): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:39:12 crc kubenswrapper[4708]: E0227 19:39:12.157625 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-5c4xg" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" Feb 27 19:39:12 crc kubenswrapper[4708]: E0227 19:39:12.245579 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:39:12 crc kubenswrapper[4708]: E0227 19:39:12.593408 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5c4xg" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" Feb 27 19:39:23 crc kubenswrapper[4708]: E0227 19:39:23.230639 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:39:26 crc kubenswrapper[4708]: I0227 19:39:26.719631 4708 generic.go:334] "Generic (PLEG): 
container finished" podID="07f53736-cb9c-4e3e-b732-5c089ac23985" containerID="d1eca89e871f7689cd2816efd14734aef2a93fc88b662acee160e7bfcf293b99" exitCode=0 Feb 27 19:39:26 crc kubenswrapper[4708]: I0227 19:39:26.719821 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-b2jlw/must-gather-m8mj6" event={"ID":"07f53736-cb9c-4e3e-b732-5c089ac23985","Type":"ContainerDied","Data":"d1eca89e871f7689cd2816efd14734aef2a93fc88b662acee160e7bfcf293b99"} Feb 27 19:39:26 crc kubenswrapper[4708]: I0227 19:39:26.720873 4708 scope.go:117] "RemoveContainer" containerID="d1eca89e871f7689cd2816efd14734aef2a93fc88b662acee160e7bfcf293b99" Feb 27 19:39:27 crc kubenswrapper[4708]: I0227 19:39:27.389210 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b2jlw_must-gather-m8mj6_07f53736-cb9c-4e3e-b732-5c089ac23985/gather/0.log" Feb 27 19:39:27 crc kubenswrapper[4708]: E0227 19:39:27.767674 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:39:27 crc kubenswrapper[4708]: E0227 19:39:27.767860 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vjpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-5c4xg_openshift-marketplace(5a2c095d-6d24-4cf3-9e2e-feb5094b5f00): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:39:27 crc kubenswrapper[4708]: E0227 19:39:27.769052 4708 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-5c4xg" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" Feb 27 19:39:30 crc kubenswrapper[4708]: I0227 19:39:30.293209 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-4sgj6"] Feb 27 19:39:30 crc kubenswrapper[4708]: I0227 19:39:30.297438 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:39:30 crc kubenswrapper[4708]: I0227 19:39:30.307656 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4sgj6"] Feb 27 19:39:30 crc kubenswrapper[4708]: I0227 19:39:30.418987 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfj7l\" (UniqueName: \"kubernetes.io/projected/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-kube-api-access-wfj7l\") pod \"redhat-operators-4sgj6\" (UID: \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\") " pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:39:30 crc kubenswrapper[4708]: I0227 19:39:30.419054 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-catalog-content\") pod \"redhat-operators-4sgj6\" (UID: \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\") " pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:39:30 crc kubenswrapper[4708]: I0227 19:39:30.419129 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-utilities\") pod \"redhat-operators-4sgj6\" (UID: \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\") " pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:39:30 crc kubenswrapper[4708]: I0227 19:39:30.521073 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfj7l\" (UniqueName: \"kubernetes.io/projected/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-kube-api-access-wfj7l\") pod \"redhat-operators-4sgj6\" (UID: \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\") " pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:39:30 crc kubenswrapper[4708]: I0227 19:39:30.521155 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-catalog-content\") pod \"redhat-operators-4sgj6\" (UID: \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\") " pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:39:30 crc kubenswrapper[4708]: I0227 19:39:30.521224 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-utilities\") pod \"redhat-operators-4sgj6\" (UID: \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\") " pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:39:30 crc kubenswrapper[4708]: I0227 19:39:30.521762 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-catalog-content\") pod \"redhat-operators-4sgj6\" (UID: \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\") " pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:39:30 crc kubenswrapper[4708]: I0227 19:39:30.521822 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-utilities\") pod \"redhat-operators-4sgj6\" (UID: \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\") " pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:39:30 crc kubenswrapper[4708]: I0227 19:39:30.546975 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfj7l\" (UniqueName: \"kubernetes.io/projected/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-kube-api-access-wfj7l\") pod \"redhat-operators-4sgj6\" (UID: \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\") " pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:39:30 crc kubenswrapper[4708]: I0227 19:39:30.633351 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:39:31 crc kubenswrapper[4708]: I0227 19:39:31.159390 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-4sgj6"] Feb 27 19:39:31 crc kubenswrapper[4708]: I0227 19:39:31.780720 4708 generic.go:334] "Generic (PLEG): container finished" podID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" containerID="1fcba0f5f53025c6f24907abfd841b67db3870d3ca97aae0c50864387efeebe8" exitCode=0 Feb 27 19:39:31 crc kubenswrapper[4708]: I0227 19:39:31.780778 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4sgj6" event={"ID":"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c","Type":"ContainerDied","Data":"1fcba0f5f53025c6f24907abfd841b67db3870d3ca97aae0c50864387efeebe8"} Feb 27 19:39:31 crc kubenswrapper[4708]: I0227 19:39:31.780812 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4sgj6" event={"ID":"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c","Type":"ContainerStarted","Data":"312cd497198c1750ad08c30706cee563932c4b59d9466dffc6eba1e90b4d7f83"} Feb 27 19:39:32 crc kubenswrapper[4708]: E0227 19:39:32.452685 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 19:39:32 crc kubenswrapper[4708]: E0227 19:39:32.452889 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfj7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-4sgj6_openshift-marketplace(ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:39:32 crc kubenswrapper[4708]: E0227 19:39:32.454288 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-4sgj6" podUID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" Feb 27 19:39:32 crc kubenswrapper[4708]: E0227 19:39:32.795941 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4sgj6" podUID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" Feb 27 19:39:33 crc kubenswrapper[4708]: I0227 19:39:33.268180 4708 scope.go:117] "RemoveContainer" containerID="c2e6231a7d4930b3a3fbacbc9f55c4e3ce56fac474277d48c1fd50c890fb9469" Feb 27 19:39:33 crc kubenswrapper[4708]: I0227 19:39:33.290411 4708 scope.go:117] "RemoveContainer" containerID="275188da689f30ffec3a377e3392df65cb025c7443247e1f539478b694fd3fc5" Feb 27 19:39:35 crc kubenswrapper[4708]: I0227 19:39:35.919578 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-b2jlw/must-gather-m8mj6"] Feb 27 19:39:35 crc kubenswrapper[4708]: I0227 19:39:35.920096 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-b2jlw/must-gather-m8mj6" podUID="07f53736-cb9c-4e3e-b732-5c089ac23985" containerName="copy" 
containerID="cri-o://071727889d8dc3921573feff682a8afa732857eac2a00e3e9c6a12f67207c76b" gracePeriod=2 Feb 27 19:39:35 crc kubenswrapper[4708]: I0227 19:39:35.927941 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-b2jlw/must-gather-m8mj6"] Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.751632 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b2jlw_must-gather-m8mj6_07f53736-cb9c-4e3e-b732-5c089ac23985/copy/0.log" Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.752367 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b2jlw/must-gather-m8mj6" Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.831376 4708 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-b2jlw_must-gather-m8mj6_07f53736-cb9c-4e3e-b732-5c089ac23985/copy/0.log" Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.831979 4708 generic.go:334] "Generic (PLEG): container finished" podID="07f53736-cb9c-4e3e-b732-5c089ac23985" containerID="071727889d8dc3921573feff682a8afa732857eac2a00e3e9c6a12f67207c76b" exitCode=143 Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.832035 4708 scope.go:117] "RemoveContainer" containerID="071727889d8dc3921573feff682a8afa732857eac2a00e3e9c6a12f67207c76b" Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.832223 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-b2jlw/must-gather-m8mj6" Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.854037 4708 scope.go:117] "RemoveContainer" containerID="d1eca89e871f7689cd2816efd14734aef2a93fc88b662acee160e7bfcf293b99" Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.862997 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpmk2\" (UniqueName: \"kubernetes.io/projected/07f53736-cb9c-4e3e-b732-5c089ac23985-kube-api-access-xpmk2\") pod \"07f53736-cb9c-4e3e-b732-5c089ac23985\" (UID: \"07f53736-cb9c-4e3e-b732-5c089ac23985\") " Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.863309 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/07f53736-cb9c-4e3e-b732-5c089ac23985-must-gather-output\") pod \"07f53736-cb9c-4e3e-b732-5c089ac23985\" (UID: \"07f53736-cb9c-4e3e-b732-5c089ac23985\") " Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.869995 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f53736-cb9c-4e3e-b732-5c089ac23985-kube-api-access-xpmk2" (OuterVolumeSpecName: "kube-api-access-xpmk2") pod "07f53736-cb9c-4e3e-b732-5c089ac23985" (UID: "07f53736-cb9c-4e3e-b732-5c089ac23985"). InnerVolumeSpecName "kube-api-access-xpmk2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.900082 4708 scope.go:117] "RemoveContainer" containerID="071727889d8dc3921573feff682a8afa732857eac2a00e3e9c6a12f67207c76b" Feb 27 19:39:36 crc kubenswrapper[4708]: E0227 19:39:36.900494 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"071727889d8dc3921573feff682a8afa732857eac2a00e3e9c6a12f67207c76b\": container with ID starting with 071727889d8dc3921573feff682a8afa732857eac2a00e3e9c6a12f67207c76b not found: ID does not exist" containerID="071727889d8dc3921573feff682a8afa732857eac2a00e3e9c6a12f67207c76b" Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.900524 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"071727889d8dc3921573feff682a8afa732857eac2a00e3e9c6a12f67207c76b"} err="failed to get container status \"071727889d8dc3921573feff682a8afa732857eac2a00e3e9c6a12f67207c76b\": rpc error: code = NotFound desc = could not find container \"071727889d8dc3921573feff682a8afa732857eac2a00e3e9c6a12f67207c76b\": container with ID starting with 071727889d8dc3921573feff682a8afa732857eac2a00e3e9c6a12f67207c76b not found: ID does not exist" Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.900542 4708 scope.go:117] "RemoveContainer" containerID="d1eca89e871f7689cd2816efd14734aef2a93fc88b662acee160e7bfcf293b99" Feb 27 19:39:36 crc kubenswrapper[4708]: E0227 19:39:36.900729 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1eca89e871f7689cd2816efd14734aef2a93fc88b662acee160e7bfcf293b99\": container with ID starting with d1eca89e871f7689cd2816efd14734aef2a93fc88b662acee160e7bfcf293b99 not found: ID does not exist" containerID="d1eca89e871f7689cd2816efd14734aef2a93fc88b662acee160e7bfcf293b99" Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.900748 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1eca89e871f7689cd2816efd14734aef2a93fc88b662acee160e7bfcf293b99"} err="failed to get container status \"d1eca89e871f7689cd2816efd14734aef2a93fc88b662acee160e7bfcf293b99\": rpc error: code = NotFound desc = could not find container \"d1eca89e871f7689cd2816efd14734aef2a93fc88b662acee160e7bfcf293b99\": container with ID starting with d1eca89e871f7689cd2816efd14734aef2a93fc88b662acee160e7bfcf293b99 not found: ID does not exist" Feb 27 19:39:36 crc kubenswrapper[4708]: I0227 19:39:36.965537 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpmk2\" (UniqueName: \"kubernetes.io/projected/07f53736-cb9c-4e3e-b732-5c089ac23985-kube-api-access-xpmk2\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:37 crc kubenswrapper[4708]: I0227 19:39:37.076901 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f53736-cb9c-4e3e-b732-5c089ac23985-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "07f53736-cb9c-4e3e-b732-5c089ac23985" (UID: "07f53736-cb9c-4e3e-b732-5c089ac23985"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:39:37 crc kubenswrapper[4708]: I0227 19:39:37.169820 4708 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/07f53736-cb9c-4e3e-b732-5c089ac23985-must-gather-output\") on node \"crc\" DevicePath \"\"" Feb 27 19:39:38 crc kubenswrapper[4708]: E0227 19:39:38.231397 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:39:38 crc kubenswrapper[4708]: I0227 19:39:38.256446 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f53736-cb9c-4e3e-b732-5c089ac23985" path="/var/lib/kubelet/pods/07f53736-cb9c-4e3e-b732-5c089ac23985/volumes" Feb 27 19:39:41 crc kubenswrapper[4708]: E0227 19:39:41.231176 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5c4xg" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" Feb 27 19:39:45 crc kubenswrapper[4708]: E0227 19:39:45.086772 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Feb 27 19:39:45 crc kubenswrapper[4708]: E0227 19:39:45.087457 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wfj7l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-4sgj6_openshift-marketplace(ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:39:45 crc kubenswrapper[4708]: E0227 19:39:45.088670 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/redhat-operator-index@sha256=340dbaa786c584e5ffe05a0f79571b9c2fe7d16a1a1fb390e5d83b437d7a1ff3/signature-3: status 500 (Internal Server Error)\"" pod="openshift-marketplace/redhat-operators-4sgj6" podUID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" Feb 27 19:39:51 crc kubenswrapper[4708]: E0227 19:39:51.230919 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:39:53 crc kubenswrapper[4708]: E0227 19:39:53.034931 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/community-operator-index:v4.18" Feb 27 19:39:53 crc kubenswrapper[4708]: E0227 19:39:53.035282 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vjpv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-5c4xg_openshift-marketplace(5a2c095d-6d24-4cf3-9e2e-feb5094b5f00): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:39:53 crc kubenswrapper[4708]: E0227 19:39:53.036968 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/community-operator-index@sha256=886ecdbcb5b8f90338063f6476072fab73c2a9a65b9f2b3835b7bd01c69794c1/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/community-operators-5c4xg" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" Feb 27 19:39:58 crc kubenswrapper[4708]: E0227 19:39:58.231966 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-4sgj6" podUID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" Feb 27 19:40:00 crc kubenswrapper[4708]: I0227 19:40:00.163419 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537020-dck8b"] Feb 27 19:40:00 crc kubenswrapper[4708]: E0227 19:40:00.164201 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f53736-cb9c-4e3e-b732-5c089ac23985" containerName="copy" Feb 27 19:40:00 crc kubenswrapper[4708]: I0227 19:40:00.164218 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f53736-cb9c-4e3e-b732-5c089ac23985" containerName="copy" Feb 27 19:40:00 crc kubenswrapper[4708]: E0227 19:40:00.164240 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07f53736-cb9c-4e3e-b732-5c089ac23985" containerName="gather" Feb 27 19:40:00 crc kubenswrapper[4708]: I0227 19:40:00.164247 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f53736-cb9c-4e3e-b732-5c089ac23985" containerName="gather" Feb 27 19:40:00 crc kubenswrapper[4708]: I0227 19:40:00.164530 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f53736-cb9c-4e3e-b732-5c089ac23985" containerName="gather" Feb 27 19:40:00 crc kubenswrapper[4708]: I0227 19:40:00.164551 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="07f53736-cb9c-4e3e-b732-5c089ac23985" containerName="copy" Feb 27 19:40:00 crc kubenswrapper[4708]: I0227 19:40:00.165392 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537020-dck8b" Feb 27 19:40:00 crc kubenswrapper[4708]: I0227 19:40:00.182031 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537020-dck8b"] Feb 27 19:40:00 crc kubenswrapper[4708]: I0227 19:40:00.355234 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4zht\" (UniqueName: \"kubernetes.io/projected/8d840ad4-5ef0-4790-9c67-6f70e919103d-kube-api-access-c4zht\") pod \"auto-csr-approver-29537020-dck8b\" (UID: \"8d840ad4-5ef0-4790-9c67-6f70e919103d\") " pod="openshift-infra/auto-csr-approver-29537020-dck8b" Feb 27 19:40:00 crc kubenswrapper[4708]: I0227 19:40:00.457944 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4zht\" (UniqueName: \"kubernetes.io/projected/8d840ad4-5ef0-4790-9c67-6f70e919103d-kube-api-access-c4zht\") pod \"auto-csr-approver-29537020-dck8b\" (UID: \"8d840ad4-5ef0-4790-9c67-6f70e919103d\") " pod="openshift-infra/auto-csr-approver-29537020-dck8b" Feb 27 19:40:01 crc kubenswrapper[4708]: I0227 19:40:01.027358 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4zht\" (UniqueName: \"kubernetes.io/projected/8d840ad4-5ef0-4790-9c67-6f70e919103d-kube-api-access-c4zht\") pod \"auto-csr-approver-29537020-dck8b\" (UID: \"8d840ad4-5ef0-4790-9c67-6f70e919103d\") " pod="openshift-infra/auto-csr-approver-29537020-dck8b" Feb 27 19:40:01 crc kubenswrapper[4708]: I0227 19:40:01.089271 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537020-dck8b" Feb 27 19:40:01 crc kubenswrapper[4708]: I0227 19:40:01.547828 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537020-dck8b"] Feb 27 19:40:01 crc kubenswrapper[4708]: W0227 19:40:01.558456 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8d840ad4_5ef0_4790_9c67_6f70e919103d.slice/crio-d7fed68b367b9ef6fbcba1269addd83ea80be9597258c7e66ab9d82244c444ad WatchSource:0}: Error finding container d7fed68b367b9ef6fbcba1269addd83ea80be9597258c7e66ab9d82244c444ad: Status 404 returned error can't find the container with id d7fed68b367b9ef6fbcba1269addd83ea80be9597258c7e66ab9d82244c444ad Feb 27 19:40:02 crc kubenswrapper[4708]: I0227 19:40:02.085051 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537020-dck8b" event={"ID":"8d840ad4-5ef0-4790-9c67-6f70e919103d","Type":"ContainerStarted","Data":"d7fed68b367b9ef6fbcba1269addd83ea80be9597258c7e66ab9d82244c444ad"} Feb 27 19:40:02 crc kubenswrapper[4708]: E0227 19:40:02.984936 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:40:02 crc kubenswrapper[4708]: E0227 19:40:02.985326 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:40:02 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not 
.status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:40:02 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-c4zht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537020-dck8b_openshift-infra(8d840ad4-5ef0-4790-9c67-6f70e919103d): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:40:02 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 19:40:02 crc kubenswrapper[4708]: E0227 19:40:02.986488 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537020-dck8b" podUID="8d840ad4-5ef0-4790-9c67-6f70e919103d" Feb 27 19:40:03 crc kubenswrapper[4708]: E0227 19:40:03.098164 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537020-dck8b" podUID="8d840ad4-5ef0-4790-9c67-6f70e919103d" Feb 27 19:40:06 crc kubenswrapper[4708]: E0227 19:40:06.231757 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:40:07 crc kubenswrapper[4708]: E0227 19:40:07.231442 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5c4xg" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" Feb 27 19:40:12 crc kubenswrapper[4708]: I0227 19:40:12.186791 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4sgj6" event={"ID":"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c","Type":"ContainerStarted","Data":"5f21dc0ec6bfadb58be7aa84100e3d2e10ff3263a47ae911f4731d5ecfd2424d"} Feb 27 19:40:18 crc kubenswrapper[4708]: E0227 19:40:18.230330 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537012-99k6z" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" Feb 27 19:40:18 crc kubenswrapper[4708]: I0227 19:40:18.270045 4708 generic.go:334] "Generic (PLEG): container finished" podID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" containerID="5f21dc0ec6bfadb58be7aa84100e3d2e10ff3263a47ae911f4731d5ecfd2424d" exitCode=0 Feb 27 19:40:18 crc kubenswrapper[4708]: I0227 19:40:18.270095 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4sgj6" event={"ID":"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c","Type":"ContainerDied","Data":"5f21dc0ec6bfadb58be7aa84100e3d2e10ff3263a47ae911f4731d5ecfd2424d"} Feb 27 19:40:19 crc kubenswrapper[4708]: I0227 19:40:19.287623 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4sgj6" event={"ID":"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c","Type":"ContainerStarted","Data":"083aa5366621fd80749b5ea9b563ae4ba713249b2a747b686c7e5a01b66a98fb"} Feb 27 19:40:19 crc kubenswrapper[4708]: I0227 19:40:19.314521 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-4sgj6" podStartSLOduration=2.42679743 podStartE2EDuration="49.314505925s" podCreationTimestamp="2026-02-27 19:39:30 +0000 UTC" firstStartedPulling="2026-02-27 19:39:31.784446265 +0000 UTC m=+9970.300243852" lastFinishedPulling="2026-02-27 19:40:18.67215475 +0000 UTC m=+10017.187952347" observedRunningTime="2026-02-27 19:40:19.31432762 +0000 UTC m=+10017.830125197" watchObservedRunningTime="2026-02-27 19:40:19.314505925 +0000 UTC m=+10017.830303512" Feb 27 19:40:20 crc kubenswrapper[4708]: E0227 19:40:20.230488 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-5c4xg" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" Feb 27 19:40:20 crc kubenswrapper[4708]: I0227 19:40:20.302683 4708 generic.go:334] "Generic (PLEG): container finished" podID="8d840ad4-5ef0-4790-9c67-6f70e919103d" containerID="5dcb089b40d01cd86a85e61cac9ae27b3054bbbc32279a6b267fb78b5cf33756" exitCode=0 Feb 27 19:40:20 crc kubenswrapper[4708]: I0227 19:40:20.302725 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537020-dck8b" event={"ID":"8d840ad4-5ef0-4790-9c67-6f70e919103d","Type":"ContainerDied","Data":"5dcb089b40d01cd86a85e61cac9ae27b3054bbbc32279a6b267fb78b5cf33756"} Feb 27 19:40:20 crc kubenswrapper[4708]: I0227 19:40:20.634214 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:40:20 crc kubenswrapper[4708]: I0227 19:40:20.635069 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:40:21 crc kubenswrapper[4708]: I0227 19:40:21.681720 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-4sgj6" podUID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" containerName="registry-server" probeResult="failure" output=< Feb 27 19:40:21 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 19:40:21 crc kubenswrapper[4708]: > Feb 27 19:40:22 crc kubenswrapper[4708]: I0227 19:40:22.256457 4708 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537020-dck8b" Feb 27 19:40:22 crc kubenswrapper[4708]: I0227 19:40:22.325995 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537020-dck8b" event={"ID":"8d840ad4-5ef0-4790-9c67-6f70e919103d","Type":"ContainerDied","Data":"d7fed68b367b9ef6fbcba1269addd83ea80be9597258c7e66ab9d82244c444ad"} Feb 27 19:40:22 crc kubenswrapper[4708]: I0227 19:40:22.326306 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7fed68b367b9ef6fbcba1269addd83ea80be9597258c7e66ab9d82244c444ad" Feb 27 19:40:22 crc kubenswrapper[4708]: I0227 19:40:22.326053 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537020-dck8b" Feb 27 19:40:22 crc kubenswrapper[4708]: I0227 19:40:22.427153 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c4zht\" (UniqueName: \"kubernetes.io/projected/8d840ad4-5ef0-4790-9c67-6f70e919103d-kube-api-access-c4zht\") pod \"8d840ad4-5ef0-4790-9c67-6f70e919103d\" (UID: \"8d840ad4-5ef0-4790-9c67-6f70e919103d\") " Feb 27 19:40:22 crc kubenswrapper[4708]: I0227 19:40:22.433969 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d840ad4-5ef0-4790-9c67-6f70e919103d-kube-api-access-c4zht" (OuterVolumeSpecName: "kube-api-access-c4zht") pod "8d840ad4-5ef0-4790-9c67-6f70e919103d" (UID: "8d840ad4-5ef0-4790-9c67-6f70e919103d"). InnerVolumeSpecName "kube-api-access-c4zht". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:40:22 crc kubenswrapper[4708]: I0227 19:40:22.532000 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c4zht\" (UniqueName: \"kubernetes.io/projected/8d840ad4-5ef0-4790-9c67-6f70e919103d-kube-api-access-c4zht\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:23 crc kubenswrapper[4708]: I0227 19:40:23.324419 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537014-9285m"] Feb 27 19:40:23 crc kubenswrapper[4708]: I0227 19:40:23.337010 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537014-9285m"] Feb 27 19:40:24 crc kubenswrapper[4708]: I0227 19:40:24.242387 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b0ad072-9864-40a7-abdb-c3ac8b7255a0" path="/var/lib/kubelet/pods/2b0ad072-9864-40a7-abdb-c3ac8b7255a0/volumes" Feb 27 19:40:30 crc kubenswrapper[4708]: I0227 19:40:30.680209 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:40:30 crc kubenswrapper[4708]: I0227 19:40:30.731515 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:40:31 crc kubenswrapper[4708]: I0227 19:40:31.516358 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4sgj6"] Feb 27 19:40:32 crc kubenswrapper[4708]: I0227 19:40:32.422112 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-4sgj6" podUID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" containerName="registry-server" containerID="cri-o://083aa5366621fd80749b5ea9b563ae4ba713249b2a747b686c7e5a01b66a98fb" gracePeriod=2 Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.407371 4708 scope.go:117] "RemoveContainer" 
containerID="fdf975746d4bf490a18cf765b2c64405efc37ae8c57746113d99ca2ddf623d9f" Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.459874 4708 generic.go:334] "Generic (PLEG): container finished" podID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" containerID="083aa5366621fd80749b5ea9b563ae4ba713249b2a747b686c7e5a01b66a98fb" exitCode=0 Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.459883 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4sgj6" event={"ID":"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c","Type":"ContainerDied","Data":"083aa5366621fd80749b5ea9b563ae4ba713249b2a747b686c7e5a01b66a98fb"} Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.459979 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-4sgj6" event={"ID":"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c","Type":"ContainerDied","Data":"312cd497198c1750ad08c30706cee563932c4b59d9466dffc6eba1e90b4d7f83"} Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.460001 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="312cd497198c1750ad08c30706cee563932c4b59d9466dffc6eba1e90b4d7f83" Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.519461 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.694727 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-catalog-content\") pod \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\" (UID: \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\") " Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.694890 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wfj7l\" (UniqueName: \"kubernetes.io/projected/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-kube-api-access-wfj7l\") pod \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\" (UID: \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\") " Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.695116 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-utilities\") pod \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\" (UID: \"ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c\") " Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.695643 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-utilities" (OuterVolumeSpecName: "utilities") pod "ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" (UID: "ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.700697 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-kube-api-access-wfj7l" (OuterVolumeSpecName: "kube-api-access-wfj7l") pod "ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" (UID: "ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c"). InnerVolumeSpecName "kube-api-access-wfj7l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.798458 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wfj7l\" (UniqueName: \"kubernetes.io/projected/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-kube-api-access-wfj7l\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.798489 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.854480 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" (UID: "ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:40:33 crc kubenswrapper[4708]: I0227 19:40:33.900553 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:34 crc kubenswrapper[4708]: I0227 19:40:34.470719 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-4sgj6" Feb 27 19:40:34 crc kubenswrapper[4708]: I0227 19:40:34.514679 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-4sgj6"] Feb 27 19:40:34 crc kubenswrapper[4708]: I0227 19:40:34.530997 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-4sgj6"] Feb 27 19:40:35 crc kubenswrapper[4708]: I0227 19:40:35.481271 4708 generic.go:334] "Generic (PLEG): container finished" podID="440d9f6e-2360-49dc-bf60-0a544c990079" containerID="387f5d5317d04f06522edbe26330d9d6fa18019ddb21c6f793f4f81697a5f4bf" exitCode=0 Feb 27 19:40:35 crc kubenswrapper[4708]: I0227 19:40:35.481375 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537012-99k6z" event={"ID":"440d9f6e-2360-49dc-bf60-0a544c990079","Type":"ContainerDied","Data":"387f5d5317d04f06522edbe26330d9d6fa18019ddb21c6f793f4f81697a5f4bf"} Feb 27 19:40:35 crc kubenswrapper[4708]: I0227 19:40:35.632272 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:40:35 crc kubenswrapper[4708]: I0227 19:40:35.632331 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:40:36 crc kubenswrapper[4708]: I0227 19:40:36.241629 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" path="/var/lib/kubelet/pods/ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c/volumes" Feb 27 19:40:36 crc kubenswrapper[4708]: I0227 19:40:36.513189 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-5c4xg" event={"ID":"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00","Type":"ContainerStarted","Data":"a2836bafe7bcc22442df95076684f47406d9f82bb3bc5700e6b4a0fec9d5e568"} Feb 27 19:40:37 crc kubenswrapper[4708]: I0227 19:40:37.219128 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537012-99k6z" Feb 27 19:40:37 crc kubenswrapper[4708]: I0227 19:40:37.370438 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhr6b\" (UniqueName: \"kubernetes.io/projected/440d9f6e-2360-49dc-bf60-0a544c990079-kube-api-access-rhr6b\") pod \"440d9f6e-2360-49dc-bf60-0a544c990079\" (UID: \"440d9f6e-2360-49dc-bf60-0a544c990079\") " Feb 27 19:40:37 crc kubenswrapper[4708]: I0227 19:40:37.378611 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/440d9f6e-2360-49dc-bf60-0a544c990079-kube-api-access-rhr6b" (OuterVolumeSpecName: "kube-api-access-rhr6b") pod "440d9f6e-2360-49dc-bf60-0a544c990079" (UID: "440d9f6e-2360-49dc-bf60-0a544c990079"). InnerVolumeSpecName "kube-api-access-rhr6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:40:37 crc kubenswrapper[4708]: I0227 19:40:37.474577 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rhr6b\" (UniqueName: \"kubernetes.io/projected/440d9f6e-2360-49dc-bf60-0a544c990079-kube-api-access-rhr6b\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:37 crc kubenswrapper[4708]: I0227 19:40:37.525072 4708 generic.go:334] "Generic (PLEG): container finished" podID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" containerID="a2836bafe7bcc22442df95076684f47406d9f82bb3bc5700e6b4a0fec9d5e568" exitCode=0 Feb 27 19:40:37 crc kubenswrapper[4708]: I0227 19:40:37.525157 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c4xg" event={"ID":"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00","Type":"ContainerDied","Data":"a2836bafe7bcc22442df95076684f47406d9f82bb3bc5700e6b4a0fec9d5e568"} Feb 27 19:40:37 crc kubenswrapper[4708]: I0227 19:40:37.529747 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537012-99k6z" event={"ID":"440d9f6e-2360-49dc-bf60-0a544c990079","Type":"ContainerDied","Data":"e183e8127081c4a3c9c7654a0f20b6a281a3a9244c9319dd1fd4249f50c6ac82"} Feb 27 19:40:37 crc kubenswrapper[4708]: I0227 19:40:37.529783 4708 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e183e8127081c4a3c9c7654a0f20b6a281a3a9244c9319dd1fd4249f50c6ac82" Feb 27 19:40:37 crc kubenswrapper[4708]: I0227 19:40:37.530335 4708 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537012-99k6z" Feb 27 19:40:38 crc kubenswrapper[4708]: I0227 19:40:38.277392 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29537012-99k6z"] Feb 27 19:40:38 crc kubenswrapper[4708]: I0227 19:40:38.291712 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29537012-99k6z"] Feb 27 19:40:38 crc kubenswrapper[4708]: I0227 19:40:38.544907 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c4xg" event={"ID":"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00","Type":"ContainerStarted","Data":"fdc82f580c37524c8ad19e86c1d81024faac72a619f953409c2863ba4c8c4b09"} Feb 27 19:40:38 crc kubenswrapper[4708]: I0227 19:40:38.572843 4708 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5c4xg" podStartSLOduration=3.121174633 podStartE2EDuration="1m29.572825782s" podCreationTimestamp="2026-02-27 19:39:09 +0000 UTC" firstStartedPulling="2026-02-27 19:39:11.579267357 +0000 UTC m=+9950.095064984" lastFinishedPulling="2026-02-27 19:40:38.030918546 +0000 UTC m=+10036.546716133" observedRunningTime="2026-02-27 19:40:38.564102085 +0000 UTC m=+10037.079899692" watchObservedRunningTime="2026-02-27 19:40:38.572825782 +0000 UTC m=+10037.088623369" Feb 27 19:40:39 crc kubenswrapper[4708]: I0227 19:40:39.742972 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:40:39 crc kubenswrapper[4708]: I0227 19:40:39.743265 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:40:40 crc kubenswrapper[4708]: I0227 19:40:40.240474 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" path="/var/lib/kubelet/pods/440d9f6e-2360-49dc-bf60-0a544c990079/volumes" Feb 27 19:40:40 crc kubenswrapper[4708]: I0227 19:40:40.790167 4708 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-5c4xg" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" containerName="registry-server" probeResult="failure" output=< Feb 27 19:40:40 crc kubenswrapper[4708]: timeout: failed to connect service ":50051" within 1s Feb 27 19:40:40 crc kubenswrapper[4708]: > Feb 27 19:40:49 crc kubenswrapper[4708]: I0227 19:40:49.813606 4708 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:40:49 crc kubenswrapper[4708]: I0227 19:40:49.872242 4708 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:40:50 crc kubenswrapper[4708]: I0227 19:40:50.057062 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5c4xg"] Feb 27 19:40:51 crc kubenswrapper[4708]: I0227 19:40:51.669304 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5c4xg" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" containerName="registry-server" containerID="cri-o://fdc82f580c37524c8ad19e86c1d81024faac72a619f953409c2863ba4c8c4b09" gracePeriod=2 Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.413323 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5c4xg"
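The startup-probe failure block above is the output of grpc_health_probe, which the marketplace registry-server containers use for their startup and readiness checks against gRPC port :50051; the probe fails until the catalog is extracted, then flips to started/ready as seen at 19:40:49. The same check can be run by hand (a sketch assuming the pod is still running and its image ships grpc_health_probe, as the registry images do):

  # exit 0 means SERVING; "failed to connect ... within 1s" is the failure output quoted above
  oc -n openshift-marketplace exec community-operators-5c4xg -- grpc_health_probe -addr=:50051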
Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.511401 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vjpv\" (UniqueName: \"kubernetes.io/projected/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-kube-api-access-9vjpv\") pod \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\" (UID: \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\") " Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.511741 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-utilities\") pod \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\" (UID: \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\") " Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.511973 4708 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-catalog-content\") pod \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\" (UID: \"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00\") " Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.512596 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-utilities" (OuterVolumeSpecName: "utilities") pod "5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" (UID: "5a2c095d-6d24-4cf3-9e2e-feb5094b5f00"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.517467 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-kube-api-access-9vjpv" (OuterVolumeSpecName: "kube-api-access-9vjpv") pod "5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" (UID: "5a2c095d-6d24-4cf3-9e2e-feb5094b5f00"). InnerVolumeSpecName "kube-api-access-9vjpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.587049 4708 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" (UID: "5a2c095d-6d24-4cf3-9e2e-feb5094b5f00"). InnerVolumeSpecName "catalog-content".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.614545 4708 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-catalog-content\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.614603 4708 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9vjpv\" (UniqueName: \"kubernetes.io/projected/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-kube-api-access-9vjpv\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.614620 4708 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00-utilities\") on node \"crc\" DevicePath \"\"" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.683831 4708 generic.go:334] "Generic (PLEG): container finished" podID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" containerID="fdc82f580c37524c8ad19e86c1d81024faac72a619f953409c2863ba4c8c4b09" exitCode=0 Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.683884 4708 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5c4xg" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.683896 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c4xg" event={"ID":"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00","Type":"ContainerDied","Data":"fdc82f580c37524c8ad19e86c1d81024faac72a619f953409c2863ba4c8c4b09"} Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.683951 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5c4xg" event={"ID":"5a2c095d-6d24-4cf3-9e2e-feb5094b5f00","Type":"ContainerDied","Data":"9d24d8a48498e45a81b41cac2a02a6158d83e2e880e53d580d9ffceb795e0dd8"} Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.683969 4708 scope.go:117] "RemoveContainer" containerID="fdc82f580c37524c8ad19e86c1d81024faac72a619f953409c2863ba4c8c4b09" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.711536 4708 scope.go:117] "RemoveContainer" containerID="a2836bafe7bcc22442df95076684f47406d9f82bb3bc5700e6b4a0fec9d5e568" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.733206 4708 scope.go:117] "RemoveContainer" containerID="30248f83d02db76ae59c93ab7a87f7184a121b9430f1a2d5b23c921948bcfad6" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.733678 4708 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5c4xg"] Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.749218 4708 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5c4xg"] Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.788561 4708 scope.go:117] "RemoveContainer" containerID="fdc82f580c37524c8ad19e86c1d81024faac72a619f953409c2863ba4c8c4b09" Feb 27 19:40:52 crc kubenswrapper[4708]: E0227 19:40:52.789011 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fdc82f580c37524c8ad19e86c1d81024faac72a619f953409c2863ba4c8c4b09\": container with ID starting with fdc82f580c37524c8ad19e86c1d81024faac72a619f953409c2863ba4c8c4b09 not found: ID does not exist" containerID="fdc82f580c37524c8ad19e86c1d81024faac72a619f953409c2863ba4c8c4b09" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.789052 
4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fdc82f580c37524c8ad19e86c1d81024faac72a619f953409c2863ba4c8c4b09"} err="failed to get container status \"fdc82f580c37524c8ad19e86c1d81024faac72a619f953409c2863ba4c8c4b09\": rpc error: code = NotFound desc = could not find container \"fdc82f580c37524c8ad19e86c1d81024faac72a619f953409c2863ba4c8c4b09\": container with ID starting with fdc82f580c37524c8ad19e86c1d81024faac72a619f953409c2863ba4c8c4b09 not found: ID does not exist" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.789076 4708 scope.go:117] "RemoveContainer" containerID="a2836bafe7bcc22442df95076684f47406d9f82bb3bc5700e6b4a0fec9d5e568" Feb 27 19:40:52 crc kubenswrapper[4708]: E0227 19:40:52.789570 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2836bafe7bcc22442df95076684f47406d9f82bb3bc5700e6b4a0fec9d5e568\": container with ID starting with a2836bafe7bcc22442df95076684f47406d9f82bb3bc5700e6b4a0fec9d5e568 not found: ID does not exist" containerID="a2836bafe7bcc22442df95076684f47406d9f82bb3bc5700e6b4a0fec9d5e568" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.789623 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2836bafe7bcc22442df95076684f47406d9f82bb3bc5700e6b4a0fec9d5e568"} err="failed to get container status \"a2836bafe7bcc22442df95076684f47406d9f82bb3bc5700e6b4a0fec9d5e568\": rpc error: code = NotFound desc = could not find container \"a2836bafe7bcc22442df95076684f47406d9f82bb3bc5700e6b4a0fec9d5e568\": container with ID starting with a2836bafe7bcc22442df95076684f47406d9f82bb3bc5700e6b4a0fec9d5e568 not found: ID does not exist" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.789656 4708 scope.go:117] "RemoveContainer" containerID="30248f83d02db76ae59c93ab7a87f7184a121b9430f1a2d5b23c921948bcfad6" Feb 27 19:40:52 crc kubenswrapper[4708]: E0227 19:40:52.790088 4708 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"30248f83d02db76ae59c93ab7a87f7184a121b9430f1a2d5b23c921948bcfad6\": container with ID starting with 30248f83d02db76ae59c93ab7a87f7184a121b9430f1a2d5b23c921948bcfad6 not found: ID does not exist" containerID="30248f83d02db76ae59c93ab7a87f7184a121b9430f1a2d5b23c921948bcfad6" Feb 27 19:40:52 crc kubenswrapper[4708]: I0227 19:40:52.790122 4708 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"30248f83d02db76ae59c93ab7a87f7184a121b9430f1a2d5b23c921948bcfad6"} err="failed to get container status \"30248f83d02db76ae59c93ab7a87f7184a121b9430f1a2d5b23c921948bcfad6\": rpc error: code = NotFound desc = could not find container \"30248f83d02db76ae59c93ab7a87f7184a121b9430f1a2d5b23c921948bcfad6\": container with ID starting with 30248f83d02db76ae59c93ab7a87f7184a121b9430f1a2d5b23c921948bcfad6 not found: ID does not exist" Feb 27 19:40:54 crc kubenswrapper[4708]: I0227 19:40:54.239368 4708 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" path="/var/lib/kubelet/pods/5a2c095d-6d24-4cf3-9e2e-feb5094b5f00/volumes" Feb 27 19:41:05 crc kubenswrapper[4708]: I0227 19:41:05.634461 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:41:05 crc kubenswrapper[4708]: I0227 19:41:05.635085 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:41:35 crc kubenswrapper[4708]: I0227 19:41:35.631575 4708 patch_prober.go:28] interesting pod/machine-config-daemon-kvxg2 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Feb 27 19:41:35 crc kubenswrapper[4708]: I0227 19:41:35.634061 4708 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Feb 27 19:41:35 crc kubenswrapper[4708]: I0227 19:41:35.634125 4708 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" Feb 27 19:41:35 crc kubenswrapper[4708]: I0227 19:41:35.635304 4708 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96"} pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Feb 27 19:41:35 crc kubenswrapper[4708]: I0227 19:41:35.635356 4708 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerName="machine-config-daemon" containerID="cri-o://6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" gracePeriod=600 Feb 27 19:41:35 crc kubenswrapper[4708]: E0227 19:41:35.957098 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:41:36 crc kubenswrapper[4708]: I0227 19:41:36.104408 4708 generic.go:334] "Generic (PLEG): container finished" podID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" exitCode=0 Feb 27 19:41:36 crc kubenswrapper[4708]: I0227 19:41:36.104469 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" event={"ID":"ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0","Type":"ContainerDied","Data":"6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96"} Feb 27 19:41:36 crc kubenswrapper[4708]: I0227 19:41:36.104505 4708 scope.go:117] "RemoveContainer" containerID="081b1c80d435493dc455ba22a5f8780a28b8b1cb9921b9014300ff6f29437e96" Feb 27 19:41:36 crc kubenswrapper[4708]: I0227 19:41:36.105376 4708 scope.go:117] "RemoveContainer" containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96"
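Here the machine-config-daemon fails its HTTP liveness probe repeatedly, so the kubelet kills it (gracePeriod=600) and then holds it in the 5m0s CrashLoopBackOff quoted in the error; the kubelet doubles the restart delay on each failed run (10s, 20s, 40s, ...) until it reaches that cap. Two checks that mirror what the log shows, runnable from the node (a sketch assuming curl is available and oc is logged in to the cluster):

  # the request the kubelet's liveness probe makes; "connection refused" reproduces the failure above
  curl -fsS http://127.0.0.1:8798/health

  # logs of the container instance that just failed the probe, to see why the endpoint stopped answering
  oc -n openshift-machine-config-operator logs --previous machine-config-daemon-kvxg2 -c machine-config-daemon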
Feb 27 19:41:36 crc kubenswrapper[4708]: E0227 19:41:36.105692 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:41:51 crc kubenswrapper[4708]: I0227 19:41:51.228961 4708 scope.go:117] "RemoveContainer" containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" Feb 27 19:41:51 crc kubenswrapper[4708]: E0227 19:41:51.229811 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.148789 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537022-w5nd2"] Feb 27 19:42:00 crc kubenswrapper[4708]: E0227 19:42:00.151118 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" containerName="extract-content" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.151134 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" containerName="extract-content" Feb 27 19:42:00 crc kubenswrapper[4708]: E0227 19:42:00.151160 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" containerName="registry-server" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.151166 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" containerName="registry-server" Feb 27 19:42:00 crc kubenswrapper[4708]: E0227 19:42:00.151187 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" containerName="extract-utilities" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.151195 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" containerName="extract-utilities" Feb 27 19:42:00 crc kubenswrapper[4708]: E0227 19:42:00.151209 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" containerName="extract-utilities" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.151217 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" containerName="extract-utilities" Feb 27 19:42:00 crc kubenswrapper[4708]: E0227 19:42:00.151229 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d840ad4-5ef0-4790-9c67-6f70e919103d" containerName="oc" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.151237 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d840ad4-5ef0-4790-9c67-6f70e919103d" containerName="oc" Feb 27 19:42:00 crc kubenswrapper[4708]: E0227 19:42:00.151251 4708 cpu_manager.go:410] "RemoveStaleState: removing
container" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" containerName="oc" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.151258 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" containerName="oc" Feb 27 19:42:00 crc kubenswrapper[4708]: E0227 19:42:00.151274 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" containerName="registry-server" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.151282 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" containerName="registry-server" Feb 27 19:42:00 crc kubenswrapper[4708]: E0227 19:42:00.151300 4708 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" containerName="extract-content" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.151307 4708 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" containerName="extract-content" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.151522 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a2c095d-6d24-4cf3-9e2e-feb5094b5f00" containerName="registry-server" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.151545 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca0acf1e-1d2b-4e1a-9b3e-f8eeae0cad1c" containerName="registry-server" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.151563 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d840ad4-5ef0-4790-9c67-6f70e919103d" containerName="oc" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.151575 4708 memory_manager.go:354] "RemoveStaleState removing state" podUID="440d9f6e-2360-49dc-bf60-0a544c990079" containerName="oc" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.152621 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.162753 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"openshift-service-ca.crt" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.163132 4708 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-infra"/"kube-root-ca.crt" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.163303 4708 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-infra"/"csr-approver-sa-dockercfg-p9cx5" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.167011 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537022-w5nd2"] Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.192746 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7d2gs\" (UniqueName: \"kubernetes.io/projected/475b0d37-7a4b-42c9-bd95-721e346c8ea2-kube-api-access-7d2gs\") pod \"auto-csr-approver-29537022-w5nd2\" (UID: \"475b0d37-7a4b-42c9-bd95-721e346c8ea2\") " pod="openshift-infra/auto-csr-approver-29537022-w5nd2" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.295074 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7d2gs\" (UniqueName: \"kubernetes.io/projected/475b0d37-7a4b-42c9-bd95-721e346c8ea2-kube-api-access-7d2gs\") pod \"auto-csr-approver-29537022-w5nd2\" (UID: \"475b0d37-7a4b-42c9-bd95-721e346c8ea2\") " pod="openshift-infra/auto-csr-approver-29537022-w5nd2" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.320488 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7d2gs\" (UniqueName: \"kubernetes.io/projected/475b0d37-7a4b-42c9-bd95-721e346c8ea2-kube-api-access-7d2gs\") pod \"auto-csr-approver-29537022-w5nd2\" (UID: \"475b0d37-7a4b-42c9-bd95-721e346c8ea2\") " pod="openshift-infra/auto-csr-approver-29537022-w5nd2" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.500195 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" Feb 27 19:42:00 crc kubenswrapper[4708]: I0227 19:42:00.963667 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537022-w5nd2"] Feb 27 19:42:01 crc kubenswrapper[4708]: I0227 19:42:01.368047 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" event={"ID":"475b0d37-7a4b-42c9-bd95-721e346c8ea2","Type":"ContainerStarted","Data":"453a66e9405ec26ec851c026d7bc4aaddf24e2d4ec0803f0674ac20f00160ddd"} Feb 27 19:42:02 crc kubenswrapper[4708]: E0227 19:42:02.120918 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:42:02 crc kubenswrapper[4708]: E0227 19:42:02.121383 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:42:02 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:42:02 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7d2gs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537022-w5nd2_openshift-infra(475b0d37-7a4b-42c9-bd95-721e346c8ea2): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:42:02 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 19:42:02 crc kubenswrapper[4708]: E0227 19:42:02.127178 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2" Feb 27 19:42:02 crc kubenswrapper[4708]: E0227 19:42:02.381312 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2"
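The &Container spec dumped above carries the auto-csr-approver job's entire payload as a bash one-liner: list CSRs with no status (i.e. not yet approved) and pipe the names to oc adm certificate approve. Reformatted for readability (the command is verbatim from the spec; running it by hand requires a kubeconfig allowed to approve certificatesigningrequests):

  oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' \
    | xargs --no-run-if-empty oc adm certificate approve

The pull itself never succeeds because registry.redhat.io answers HTTP 500 for the image's sigstore signature blob, so cri-o aborts the copy. The failing endpoint is an ordinary HTTPS URL and can be probed directly (a sketch assuming curl; the URL is copied from the error above):

  curl -sS -o /dev/null -w '%{http_code}\n' 'https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7'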
pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2" Feb 27 19:42:06 crc kubenswrapper[4708]: I0227 19:42:06.228687 4708 scope.go:117] "RemoveContainer" containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" Feb 27 19:42:06 crc kubenswrapper[4708]: E0227 19:42:06.229384 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:42:15 crc kubenswrapper[4708]: E0227 19:42:15.458563 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:42:15 crc kubenswrapper[4708]: E0227 19:42:15.459979 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:42:15 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:42:15 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7d2gs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537022-w5nd2_openshift-infra(475b0d37-7a4b-42c9-bd95-721e346c8ea2): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:42:15 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 19:42:15 crc kubenswrapper[4708]: E0227 19:42:15.461052 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2" Feb 27 19:42:19 crc kubenswrapper[4708]: I0227 19:42:19.228718 4708 scope.go:117] "RemoveContainer" 
containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" Feb 27 19:42:19 crc kubenswrapper[4708]: E0227 19:42:19.229493 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:42:29 crc kubenswrapper[4708]: E0227 19:42:29.232755 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2" Feb 27 19:42:33 crc kubenswrapper[4708]: I0227 19:42:33.229544 4708 scope.go:117] "RemoveContainer" containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" Feb 27 19:42:33 crc kubenswrapper[4708]: E0227 19:42:33.230390 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:42:40 crc kubenswrapper[4708]: I0227 19:42:40.230742 4708 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Feb 27 19:42:41 crc kubenswrapper[4708]: E0227 19:42:41.205030 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:42:41 crc kubenswrapper[4708]: E0227 19:42:41.205443 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:42:41 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:42:41 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7d2gs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537022-w5nd2_openshift-infra(475b0d37-7a4b-42c9-bd95-721e346c8ea2): ErrImagePull: 
copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:42:41 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 19:42:41 crc kubenswrapper[4708]: E0227 19:42:41.206613 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2" Feb 27 19:42:45 crc kubenswrapper[4708]: I0227 19:42:45.229049 4708 scope.go:117] "RemoveContainer" containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" Feb 27 19:42:45 crc kubenswrapper[4708]: E0227 19:42:45.229866 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:42:53 crc kubenswrapper[4708]: E0227 19:42:53.230957 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2" Feb 27 19:42:57 crc kubenswrapper[4708]: I0227 19:42:57.228864 4708 scope.go:117] "RemoveContainer" containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" Feb 27 19:42:57 crc kubenswrapper[4708]: E0227 19:42:57.229532 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:42:58 crc kubenswrapper[4708]: I0227 19:42:58.549661 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pff7t"] Feb 27 19:42:58 crc kubenswrapper[4708]: I0227 19:42:58.552382 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pff7t" Feb 27 19:42:58 crc kubenswrapper[4708]: I0227 19:42:58.563289 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pff7t"] Feb 27 19:42:58 crc kubenswrapper[4708]: I0227 19:42:58.601418 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6cb4e67-ca0c-4712-bd88-2b5dedea9180-utilities\") pod \"certified-operators-pff7t\" (UID: \"f6cb4e67-ca0c-4712-bd88-2b5dedea9180\") " pod="openshift-marketplace/certified-operators-pff7t" Feb 27 19:42:58 crc kubenswrapper[4708]: I0227 19:42:58.601540 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6cb4e67-ca0c-4712-bd88-2b5dedea9180-catalog-content\") pod \"certified-operators-pff7t\" (UID: \"f6cb4e67-ca0c-4712-bd88-2b5dedea9180\") " pod="openshift-marketplace/certified-operators-pff7t" Feb 27 19:42:58 crc kubenswrapper[4708]: I0227 19:42:58.601626 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xzkx\" (UniqueName: \"kubernetes.io/projected/f6cb4e67-ca0c-4712-bd88-2b5dedea9180-kube-api-access-2xzkx\") pod \"certified-operators-pff7t\" (UID: \"f6cb4e67-ca0c-4712-bd88-2b5dedea9180\") " pod="openshift-marketplace/certified-operators-pff7t" Feb 27 19:42:58 crc kubenswrapper[4708]: I0227 19:42:58.703184 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6cb4e67-ca0c-4712-bd88-2b5dedea9180-utilities\") pod \"certified-operators-pff7t\" (UID: \"f6cb4e67-ca0c-4712-bd88-2b5dedea9180\") " pod="openshift-marketplace/certified-operators-pff7t" Feb 27 19:42:58 crc kubenswrapper[4708]: I0227 19:42:58.703514 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6cb4e67-ca0c-4712-bd88-2b5dedea9180-catalog-content\") pod \"certified-operators-pff7t\" (UID: \"f6cb4e67-ca0c-4712-bd88-2b5dedea9180\") " pod="openshift-marketplace/certified-operators-pff7t" Feb 27 19:42:58 crc kubenswrapper[4708]: I0227 19:42:58.703624 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2xzkx\" (UniqueName: \"kubernetes.io/projected/f6cb4e67-ca0c-4712-bd88-2b5dedea9180-kube-api-access-2xzkx\") pod \"certified-operators-pff7t\" (UID: \"f6cb4e67-ca0c-4712-bd88-2b5dedea9180\") " pod="openshift-marketplace/certified-operators-pff7t" Feb 27 19:42:58 crc kubenswrapper[4708]: I0227 19:42:58.703814 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6cb4e67-ca0c-4712-bd88-2b5dedea9180-utilities\") pod \"certified-operators-pff7t\" (UID: \"f6cb4e67-ca0c-4712-bd88-2b5dedea9180\") " pod="openshift-marketplace/certified-operators-pff7t" Feb 27 19:42:58 crc kubenswrapper[4708]: I0227 19:42:58.704024 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6cb4e67-ca0c-4712-bd88-2b5dedea9180-catalog-content\") pod \"certified-operators-pff7t\" (UID: \"f6cb4e67-ca0c-4712-bd88-2b5dedea9180\") " pod="openshift-marketplace/certified-operators-pff7t" Feb 27 19:42:58 crc kubenswrapper[4708]: I0227 19:42:58.725703 4708 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-2xzkx\" (UniqueName: \"kubernetes.io/projected/f6cb4e67-ca0c-4712-bd88-2b5dedea9180-kube-api-access-2xzkx\") pod \"certified-operators-pff7t\" (UID: \"f6cb4e67-ca0c-4712-bd88-2b5dedea9180\") " pod="openshift-marketplace/certified-operators-pff7t" Feb 27 19:42:58 crc kubenswrapper[4708]: I0227 19:42:58.878136 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pff7t" Feb 27 19:42:59 crc kubenswrapper[4708]: I0227 19:42:59.549581 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pff7t"] Feb 27 19:42:59 crc kubenswrapper[4708]: W0227 19:42:59.550419 4708 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6cb4e67_ca0c_4712_bd88_2b5dedea9180.slice/crio-9c66ec6418254c49c3b0b45ea14e30c78d54cb4838cb1b0197ab18d015cede97 WatchSource:0}: Error finding container 9c66ec6418254c49c3b0b45ea14e30c78d54cb4838cb1b0197ab18d015cede97: Status 404 returned error can't find the container with id 9c66ec6418254c49c3b0b45ea14e30c78d54cb4838cb1b0197ab18d015cede97 Feb 27 19:42:59 crc kubenswrapper[4708]: I0227 19:42:59.964410 4708 generic.go:334] "Generic (PLEG): container finished" podID="f6cb4e67-ca0c-4712-bd88-2b5dedea9180" containerID="3399cd11d10b4edf9be966806027ad356df030c40db770214290f51e833d3a95" exitCode=0 Feb 27 19:42:59 crc kubenswrapper[4708]: I0227 19:42:59.964508 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pff7t" event={"ID":"f6cb4e67-ca0c-4712-bd88-2b5dedea9180","Type":"ContainerDied","Data":"3399cd11d10b4edf9be966806027ad356df030c40db770214290f51e833d3a95"} Feb 27 19:42:59 crc kubenswrapper[4708]: I0227 19:42:59.964718 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pff7t" event={"ID":"f6cb4e67-ca0c-4712-bd88-2b5dedea9180","Type":"ContainerStarted","Data":"9c66ec6418254c49c3b0b45ea14e30c78d54cb4838cb1b0197ab18d015cede97"} Feb 27 19:43:00 crc kubenswrapper[4708]: E0227 19:43:00.623016 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 19:43:00 crc kubenswrapper[4708]: E0227 19:43:00.623222 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xzkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-pff7t_openshift-marketplace(f6cb4e67-ca0c-4712-bd88-2b5dedea9180): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:43:00 crc kubenswrapper[4708]: E0227 19:43:00.624489 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-pff7t" podUID="f6cb4e67-ca0c-4712-bd88-2b5dedea9180" Feb 27 19:43:00 crc kubenswrapper[4708]: E0227 19:43:00.975940 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pff7t" podUID="f6cb4e67-ca0c-4712-bd88-2b5dedea9180" Feb 27 19:43:04 crc kubenswrapper[4708]: E0227 19:43:04.231065 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2" Feb 27 19:43:10 crc kubenswrapper[4708]: I0227 19:43:10.228247 4708 scope.go:117] "RemoveContainer" containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" Feb 27 19:43:10 crc kubenswrapper[4708]: E0227 19:43:10.229050 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0"
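The certified-operators-pff7t init container fails the same way: the registry returns 500 for the certified-operator-index sigstore signature, cri-o refuses the unverified image, and the pod alternates between ErrImagePull and ImagePullBackOff while machine-config-daemon keeps cycling through its own CrashLoopBackOff. Both legs of the failure can be confirmed independently of the kubelet (a sketch; crictl pull relies on the node's pull secret for registry.redhat.io, the curl does not):

  # probe the failing signature blob directly (URL copied from the error above)
  curl -sS -o /dev/null -w '%{http_code}\n' 'https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2'

  # retry the pull through cri-o itself, exercising the same signature policy
  crictl pull registry.redhat.io/redhat/certified-operator-index:v4.18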
Feb 27 19:43:15 crc kubenswrapper[4708]: E0227 19:43:15.231212 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2" Feb 27 19:43:15 crc kubenswrapper[4708]: E0227 19:43:15.793593 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 19:43:15 crc kubenswrapper[4708]: E0227 19:43:15.794077 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xzkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-pff7t_openshift-marketplace(f6cb4e67-ca0c-4712-bd88-2b5dedea9180): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from
https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-pff7t" podUID="f6cb4e67-ca0c-4712-bd88-2b5dedea9180" Feb 27 19:43:25 crc kubenswrapper[4708]: I0227 19:43:25.229288 4708 scope.go:117] "RemoveContainer" containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" Feb 27 19:43:25 crc kubenswrapper[4708]: E0227 19:43:25.231174 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:43:28 crc kubenswrapper[4708]: E0227 19:43:28.230969 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pff7t" podUID="f6cb4e67-ca0c-4712-bd88-2b5dedea9180" Feb 27 19:43:29 crc kubenswrapper[4708]: E0227 19:43:29.138018 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:43:29 crc kubenswrapper[4708]: E0227 19:43:29.138151 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:43:29 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:43:29 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7d2gs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537022-w5nd2_openshift-infra(475b0d37-7a4b-42c9-bd95-721e346c8ea2): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:43:29 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 19:43:29 crc kubenswrapper[4708]: E0227 19:43:29.139338 4708 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2" Feb 27 19:43:39 crc kubenswrapper[4708]: I0227 19:43:39.228423 4708 scope.go:117] "RemoveContainer" containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" Feb 27 19:43:39 crc kubenswrapper[4708]: E0227 19:43:39.229191 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:43:40 crc kubenswrapper[4708]: E0227 19:43:40.838896 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Feb 27 19:43:40 crc kubenswrapper[4708]: E0227 19:43:40.839173 4708 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2xzkx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-pff7t_openshift-marketplace(f6cb4e67-ca0c-4712-bd88-2b5dedea9180): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from 
https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)" logger="UnhandledError" Feb 27 19:43:40 crc kubenswrapper[4708]: E0227 19:43:40.840454 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/redhat/certified-operator-index@sha256=625372062485d8ed1e4e84c388a7d036cb39c1b93d8c56dd3418fce0c028b62b/signature-2: status 500 (Internal Server Error)\"" pod="openshift-marketplace/certified-operators-pff7t" podUID="f6cb4e67-ca0c-4712-bd88-2b5dedea9180" Feb 27 19:43:43 crc kubenswrapper[4708]: E0227 19:43:43.230320 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2" Feb 27 19:43:51 crc kubenswrapper[4708]: E0227 19:43:51.230361 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pff7t" podUID="f6cb4e67-ca0c-4712-bd88-2b5dedea9180" Feb 27 19:43:52 crc kubenswrapper[4708]: I0227 19:43:52.256052 4708 scope.go:117] "RemoveContainer" containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" Feb 27 19:43:52 crc kubenswrapper[4708]: E0227 19:43:52.256793 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:43:55 crc kubenswrapper[4708]: E0227 19:43:55.231970 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2" Feb 27 19:44:00 crc kubenswrapper[4708]: I0227 19:44:00.149561 4708 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29537024-84fb4"] Feb 27 19:44:00 crc kubenswrapper[4708]: I0227 19:44:00.151768 4708 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29537024-84fb4" Feb 27 19:44:00 crc kubenswrapper[4708]: I0227 19:44:00.169945 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537024-84fb4"] Feb 27 19:44:00 crc kubenswrapper[4708]: I0227 19:44:00.244472 4708 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s74pp\" (UniqueName: \"kubernetes.io/projected/566d05c2-06e6-4418-b3ff-53d50f958eff-kube-api-access-s74pp\") pod \"auto-csr-approver-29537024-84fb4\" (UID: \"566d05c2-06e6-4418-b3ff-53d50f958eff\") " pod="openshift-infra/auto-csr-approver-29537024-84fb4" Feb 27 19:44:00 crc kubenswrapper[4708]: I0227 19:44:00.346104 4708 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s74pp\" (UniqueName: \"kubernetes.io/projected/566d05c2-06e6-4418-b3ff-53d50f958eff-kube-api-access-s74pp\") pod \"auto-csr-approver-29537024-84fb4\" (UID: \"566d05c2-06e6-4418-b3ff-53d50f958eff\") " pod="openshift-infra/auto-csr-approver-29537024-84fb4" Feb 27 19:44:00 crc kubenswrapper[4708]: I0227 19:44:00.366764 4708 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s74pp\" (UniqueName: \"kubernetes.io/projected/566d05c2-06e6-4418-b3ff-53d50f958eff-kube-api-access-s74pp\") pod \"auto-csr-approver-29537024-84fb4\" (UID: \"566d05c2-06e6-4418-b3ff-53d50f958eff\") " pod="openshift-infra/auto-csr-approver-29537024-84fb4" Feb 27 19:44:00 crc kubenswrapper[4708]: I0227 19:44:00.476442 4708 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29537024-84fb4" Feb 27 19:44:00 crc kubenswrapper[4708]: I0227 19:44:00.998498 4708 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29537024-84fb4"] Feb 27 19:44:01 crc kubenswrapper[4708]: I0227 19:44:01.583456 4708 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29537024-84fb4" event={"ID":"566d05c2-06e6-4418-b3ff-53d50f958eff","Type":"ContainerStarted","Data":"e239b0c070e7cb0dcb1487ffe5f829b158919a99f0f1a98651e14cbadf19475b"} Feb 27 19:44:01 crc kubenswrapper[4708]: E0227 19:44:01.911570 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:44:01 crc kubenswrapper[4708]: E0227 19:44:01.911695 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:44:01 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:44:01 crc kubenswrapper[4708]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s74pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537024-84fb4_openshift-infra(566d05c2-06e6-4418-b3ff-53d50f958eff): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:44:01 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 19:44:01 crc kubenswrapper[4708]: E0227 19:44:01.912913 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537024-84fb4" podUID="566d05c2-06e6-4418-b3ff-53d50f958eff" Feb 27 19:44:02 crc kubenswrapper[4708]: E0227 19:44:02.596516 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537024-84fb4" podUID="566d05c2-06e6-4418-b3ff-53d50f958eff" Feb 27 19:44:04 crc kubenswrapper[4708]: I0227 19:44:04.230636 4708 scope.go:117] "RemoveContainer" containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" Feb 27 19:44:04 crc kubenswrapper[4708]: E0227 19:44:04.231111 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:44:04 crc kubenswrapper[4708]: E0227 19:44:04.231404 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pff7t" podUID="f6cb4e67-ca0c-4712-bd88-2b5dedea9180" Feb 27 19:44:06 crc kubenswrapper[4708]: E0227 19:44:06.230080 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2" Feb 27 19:44:17 crc 
kubenswrapper[4708]: E0227 19:44:17.193389 4708 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)" image="registry.redhat.io/openshift4/ose-cli:latest" Feb 27 19:44:17 crc kubenswrapper[4708]: E0227 19:44:17.193899 4708 kuberuntime_manager.go:1274] "Unhandled Error" err=< Feb 27 19:44:17 crc kubenswrapper[4708]: container &Container{Name:oc,Image:registry.redhat.io/openshift4/ose-cli:latest,Command:[/bin/bash -c oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Feb 27 19:44:17 crc kubenswrapper[4708]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-s74pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod auto-csr-approver-29537024-84fb4_openshift-infra(566d05c2-06e6-4418-b3ff-53d50f958eff): ErrImagePull: copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error) Feb 27 19:44:17 crc kubenswrapper[4708]: > logger="UnhandledError" Feb 27 19:44:17 crc kubenswrapper[4708]: E0227 19:44:17.195029 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ErrImagePull: \"copying system image from manifest list: reading signatures: reading signature from https://registry.redhat.io/containers/sigstore/openshift4/ose-cli@sha256=69762925e16053d77685ff3a08b3b45dd2bfa5d68277851bc6969b368bbd0cb9/signature-7: status 500 (Internal Server Error)\"" pod="openshift-infra/auto-csr-approver-29537024-84fb4" podUID="566d05c2-06e6-4418-b3ff-53d50f958eff" Feb 27 19:44:18 crc kubenswrapper[4708]: E0227 19:44:18.231357 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2" Feb 27 19:44:19 crc kubenswrapper[4708]: I0227 19:44:19.229903 4708 scope.go:117] "RemoveContainer" containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" Feb 27 19:44:19 crc kubenswrapper[4708]: E0227 19:44:19.230314 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-pff7t" 
podUID="f6cb4e67-ca0c-4712-bd88-2b5dedea9180" Feb 27 19:44:19 crc kubenswrapper[4708]: E0227 19:44:19.230432 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:44:31 crc kubenswrapper[4708]: I0227 19:44:31.228908 4708 scope.go:117] "RemoveContainer" containerID="6668cc7b1dc4b0e7bad665616521e8c568611081902f7cc894ba5655f2e8bd96" Feb 27 19:44:31 crc kubenswrapper[4708]: E0227 19:44:31.229704 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-kvxg2_openshift-machine-config-operator(ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0)\"" pod="openshift-machine-config-operator/machine-config-daemon-kvxg2" podUID="ef5fca1b-6a30-4a3a-9bb5-c3840eac80d0" Feb 27 19:44:31 crc kubenswrapper[4708]: E0227 19:44:31.230902 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537024-84fb4" podUID="566d05c2-06e6-4418-b3ff-53d50f958eff" Feb 27 19:44:32 crc kubenswrapper[4708]: E0227 19:44:32.242695 4708 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"oc\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/openshift4/ose-cli:latest\\\"\"" pod="openshift-infra/auto-csr-approver-29537022-w5nd2" podUID="475b0d37-7a4b-42c9-bd95-721e346c8ea2"